00:00:00.001 Started by upstream project "autotest-per-patch" build number 132722 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.922 The recommended git tool is: git 00:00:00.923 using credential 00000000-0000-0000-0000-000000000002 00:00:00.925 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.936 Fetching changes from the remote Git repository 00:00:00.939 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.950 Using shallow fetch with depth 1 00:00:00.950 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.950 > git --version # timeout=10 00:00:00.962 > git --version # 'git version 2.39.2' 00:00:00.962 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.972 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.972 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.591 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.603 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.615 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.615 > git config core.sparsecheckout # timeout=10 00:00:06.626 > git read-tree -mu HEAD # timeout=10 00:00:06.640 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.659 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.659 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.728 [Pipeline] Start of Pipeline 00:00:06.737 [Pipeline] library 00:00:06.739 Loading library shm_lib@master 00:00:06.739 Library shm_lib@master is cached. Copying from home. 00:00:06.752 [Pipeline] node 00:00:06.774 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.780 [Pipeline] { 00:00:06.810 [Pipeline] catchError 00:00:06.812 [Pipeline] { 00:00:06.821 [Pipeline] wrap 00:00:06.827 [Pipeline] { 00:00:06.833 [Pipeline] stage 00:00:06.834 [Pipeline] { (Prologue) 00:00:07.070 [Pipeline] sh 00:00:07.357 + logger -p user.info -t JENKINS-CI 00:00:07.376 [Pipeline] echo 00:00:07.378 Node: CYP9 00:00:07.387 [Pipeline] sh 00:00:07.692 [Pipeline] setCustomBuildProperty 00:00:07.705 [Pipeline] echo 00:00:07.706 Cleanup processes 00:00:07.712 [Pipeline] sh 00:00:08.001 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.001 2498541 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.016 [Pipeline] sh 00:00:08.304 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.304 ++ grep -v 'sudo pgrep' 00:00:08.304 ++ awk '{print $1}' 00:00:08.304 + sudo kill -9 00:00:08.304 + true 00:00:08.319 [Pipeline] cleanWs 00:00:08.328 [WS-CLEANUP] Deleting project workspace... 00:00:08.328 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.335 [WS-CLEANUP] done 00:00:08.340 [Pipeline] setCustomBuildProperty 00:00:08.356 [Pipeline] sh 00:00:08.643 + sudo git config --global --replace-all safe.directory '*' 00:00:08.744 [Pipeline] httpRequest 00:00:09.149 [Pipeline] echo 00:00:09.151 Sorcerer 10.211.164.101 is alive 00:00:09.161 [Pipeline] retry 00:00:09.167 [Pipeline] { 00:00:09.182 [Pipeline] httpRequest 00:00:09.187 HttpMethod: GET 00:00:09.188 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.188 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.201 Response Code: HTTP/1.1 200 OK 00:00:09.201 Success: Status code 200 is in the accepted range: 200,404 00:00:09.202 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.123 [Pipeline] } 00:00:12.142 [Pipeline] // retry 00:00:12.150 [Pipeline] sh 00:00:12.438 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.456 [Pipeline] httpRequest 00:00:13.343 [Pipeline] echo 00:00:13.345 Sorcerer 10.211.164.101 is alive 00:00:13.355 [Pipeline] retry 00:00:13.357 [Pipeline] { 00:00:13.372 [Pipeline] httpRequest 00:00:13.377 HttpMethod: GET 00:00:13.377 URL: http://10.211.164.101/packages/spdk_6696ebaaea03ecbe501fdf56eaaaa813afbd3409.tar.gz 00:00:13.378 Sending request to url: http://10.211.164.101/packages/spdk_6696ebaaea03ecbe501fdf56eaaaa813afbd3409.tar.gz 00:00:13.401 Response Code: HTTP/1.1 200 OK 00:00:13.401 Success: Status code 200 is in the accepted range: 200,404 00:00:13.402 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6696ebaaea03ecbe501fdf56eaaaa813afbd3409.tar.gz 00:05:52.746 [Pipeline] } 00:05:52.764 [Pipeline] // retry 00:05:52.772 [Pipeline] sh 00:05:53.061 + tar --no-same-owner -xf spdk_6696ebaaea03ecbe501fdf56eaaaa813afbd3409.tar.gz 00:05:56.382 [Pipeline] sh 00:05:56.671 + git -C spdk log --oneline -n5 00:05:56.671 6696ebaae util: keep track of nested child fd_groups 00:05:56.671 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:05:56.671 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:05:56.671 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:05:56.671 e2dfdf06c accel/mlx5: Register post_poller handler 00:05:56.681 [Pipeline] } 00:05:56.691 [Pipeline] // stage 00:05:56.698 [Pipeline] stage 00:05:56.700 [Pipeline] { (Prepare) 00:05:56.712 [Pipeline] writeFile 00:05:56.723 [Pipeline] sh 00:05:57.007 + logger -p user.info -t JENKINS-CI 00:05:57.022 [Pipeline] sh 00:05:57.335 + logger -p user.info -t JENKINS-CI 00:05:57.350 [Pipeline] sh 00:05:57.643 + cat autorun-spdk.conf 00:05:57.643 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:57.643 SPDK_TEST_NVMF=1 00:05:57.643 SPDK_TEST_NVME_CLI=1 00:05:57.643 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:57.643 SPDK_TEST_NVMF_NICS=e810 00:05:57.643 SPDK_TEST_VFIOUSER=1 00:05:57.643 SPDK_RUN_UBSAN=1 00:05:57.643 NET_TYPE=phy 00:05:57.651 RUN_NIGHTLY=0 00:05:57.656 [Pipeline] readFile 00:05:57.684 [Pipeline] withEnv 00:05:57.686 [Pipeline] { 00:05:57.699 [Pipeline] sh 00:05:57.989 + set -ex 00:05:57.989 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:05:57.989 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:57.989 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:57.989 ++ SPDK_TEST_NVMF=1 00:05:57.989 ++ SPDK_TEST_NVME_CLI=1 00:05:57.989 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:57.989 ++ 
SPDK_TEST_NVMF_NICS=e810 00:05:57.989 ++ SPDK_TEST_VFIOUSER=1 00:05:57.989 ++ SPDK_RUN_UBSAN=1 00:05:57.989 ++ NET_TYPE=phy 00:05:57.989 ++ RUN_NIGHTLY=0 00:05:57.989 + case $SPDK_TEST_NVMF_NICS in 00:05:57.989 + DRIVERS=ice 00:05:57.989 + [[ tcp == \r\d\m\a ]] 00:05:57.989 + [[ -n ice ]] 00:05:57.989 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:05:57.989 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:05:57.989 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:05:57.989 rmmod: ERROR: Module irdma is not currently loaded 00:05:57.989 rmmod: ERROR: Module i40iw is not currently loaded 00:05:57.989 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:05:57.989 + true 00:05:57.989 + for D in $DRIVERS 00:05:57.989 + sudo modprobe ice 00:05:57.989 + exit 0 00:05:58.000 [Pipeline] } 00:05:58.017 [Pipeline] // withEnv 00:05:58.022 [Pipeline] } 00:05:58.037 [Pipeline] // stage 00:05:58.047 [Pipeline] catchError 00:05:58.048 [Pipeline] { 00:05:58.063 [Pipeline] timeout 00:05:58.063 Timeout set to expire in 1 hr 0 min 00:05:58.065 [Pipeline] { 00:05:58.079 [Pipeline] stage 00:05:58.081 [Pipeline] { (Tests) 00:05:58.096 [Pipeline] sh 00:05:58.388 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:58.388 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:58.388 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:58.388 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:05:58.388 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:58.388 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:58.388 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:05:58.388 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:58.388 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:58.388 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:58.388 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:05:58.388 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:58.388 + source /etc/os-release 00:05:58.388 ++ NAME='Fedora Linux' 00:05:58.388 ++ VERSION='39 (Cloud Edition)' 00:05:58.388 ++ ID=fedora 00:05:58.388 ++ VERSION_ID=39 00:05:58.388 ++ VERSION_CODENAME= 00:05:58.388 ++ PLATFORM_ID=platform:f39 00:05:58.388 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:58.388 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:58.388 ++ LOGO=fedora-logo-icon 00:05:58.388 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:58.388 ++ HOME_URL=https://fedoraproject.org/ 00:05:58.388 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:58.388 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:58.388 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:58.388 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:58.388 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:58.388 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:58.388 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:58.388 ++ SUPPORT_END=2024-11-12 00:05:58.388 ++ VARIANT='Cloud Edition' 00:05:58.388 ++ VARIANT_ID=cloud 00:05:58.388 + uname -a 00:05:58.388 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:58.388 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:01.694 Hugepages 00:06:01.694 node hugesize free / total 00:06:01.694 node0 1048576kB 0 / 0 00:06:01.694 node0 2048kB 0 / 0 00:06:01.694 node1 1048576kB 0 / 0 00:06:01.694 node1 2048kB 0 / 0 00:06:01.694 
00:06:01.694 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:01.694 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:01.694 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:01.694 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:01.694 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:01.694 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:01.694 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:01.694 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:01.694 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:01.694 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:01.694 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:01.694 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:01.694 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:01.694 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:01.694 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:01.694 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:01.694 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:01.694 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:01.694 + rm -f /tmp/spdk-ld-path 00:06:01.694 + source autorun-spdk.conf 00:06:01.694 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:01.694 ++ SPDK_TEST_NVMF=1 00:06:01.694 ++ SPDK_TEST_NVME_CLI=1 00:06:01.694 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:01.694 ++ SPDK_TEST_NVMF_NICS=e810 00:06:01.694 ++ SPDK_TEST_VFIOUSER=1 00:06:01.694 ++ SPDK_RUN_UBSAN=1 00:06:01.694 ++ NET_TYPE=phy 00:06:01.694 ++ RUN_NIGHTLY=0 00:06:01.694 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:01.694 + [[ -n '' ]] 00:06:01.694 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:01.694 + for M in /var/spdk/build-*-manifest.txt 00:06:01.694 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:01.695 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:01.695 + for M in /var/spdk/build-*-manifest.txt 00:06:01.695 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:01.695 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:01.695 + for M in /var/spdk/build-*-manifest.txt 00:06:01.695 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:01.695 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:01.695 ++ uname 00:06:01.695 + [[ Linux == \L\i\n\u\x ]] 00:06:01.695 + sudo dmesg -T 00:06:01.695 + sudo dmesg --clear 00:06:01.695 + dmesg_pid=2500673 00:06:01.695 + [[ Fedora Linux == FreeBSD ]] 00:06:01.695 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:01.695 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:01.695 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:01.695 + [[ -x /usr/src/fio-static/fio ]] 00:06:01.695 + export FIO_BIN=/usr/src/fio-static/fio 00:06:01.695 + FIO_BIN=/usr/src/fio-static/fio 00:06:01.695 + sudo dmesg -Tw 00:06:01.695 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:01.695 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:06:01.695 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:01.695 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:01.695 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:01.695 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:01.695 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:01.695 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:01.695 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:01.695 13:59:50 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:06:01.695 13:59:50 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:01.695 13:59:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:01.695 13:59:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:06:01.695 13:59:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:06:01.695 13:59:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:01.695 13:59:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:06:01.695 13:59:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:06:01.695 13:59:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:06:01.695 13:59:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:06:01.695 13:59:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:06:01.695 13:59:50 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:01.695 13:59:50 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:01.957 13:59:50 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:06:01.957 13:59:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.957 13:59:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:01.957 13:59:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:01.957 13:59:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.957 13:59:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.957 13:59:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.957 13:59:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.957 13:59:50 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.957 13:59:50 -- paths/export.sh@5 -- $ export PATH 00:06:01.957 13:59:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.957 13:59:50 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:01.957 13:59:50 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:01.957 13:59:50 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733489990.XXXXXX 00:06:01.958 13:59:50 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733489990.uh5GHQ 00:06:01.958 13:59:50 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:01.958 13:59:50 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:01.958 13:59:50 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:06:01.958 13:59:50 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:06:01.958 13:59:50 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:06:01.958 13:59:50 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:01.958 13:59:50 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:01.958 13:59:50 -- common/autotest_common.sh@10 -- $ set +x 00:06:01.958 13:59:50 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:06:01.958 13:59:50 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:01.958 13:59:50 -- pm/common@17 -- $ local monitor 00:06:01.958 13:59:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.958 13:59:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.958 13:59:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.958 13:59:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:01.958 13:59:50 -- pm/common@21 -- $ date +%s 00:06:01.958 13:59:50 -- pm/common@25 -- $ sleep 1 00:06:01.958 13:59:50 -- pm/common@21 -- $ date +%s 00:06:01.958 13:59:50 -- pm/common@21 -- $ date +%s 00:06:01.958 13:59:50 -- pm/common@21 -- $ date +%s 00:06:01.958 13:59:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733489990 00:06:01.958 13:59:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733489990 00:06:01.958 13:59:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733489990 00:06:01.958 13:59:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733489990 00:06:01.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733489990_collect-vmstat.pm.log 00:06:01.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733489990_collect-cpu-load.pm.log 00:06:01.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733489990_collect-cpu-temp.pm.log 00:06:01.958 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733489990_collect-bmc-pm.bmc.pm.log 00:06:02.901 13:59:51 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:02.901 13:59:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:02.901 13:59:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:02.901 13:59:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:02.901 13:59:51 -- spdk/autobuild.sh@16 -- $ date -u 00:06:02.901 Fri Dec 6 12:59:51 PM UTC 2024 00:06:02.901 13:59:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:02.901 v25.01-pre-304-g6696ebaae 00:06:02.901 13:59:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:02.901 13:59:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:02.901 13:59:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:02.901 13:59:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:02.901 13:59:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:02.901 13:59:51 -- common/autotest_common.sh@10 -- $ set +x 00:06:02.901 ************************************ 00:06:02.901 START TEST ubsan 00:06:02.901 ************************************ 00:06:02.901 13:59:51 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:02.901 using ubsan 00:06:02.901 00:06:02.901 real 0m0.001s 00:06:02.901 user 0m0.000s 00:06:02.901 sys 0m0.000s 00:06:02.901 13:59:51 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:02.901 13:59:51 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:02.901 ************************************ 00:06:02.901 END TEST ubsan 00:06:02.901 ************************************ 00:06:03.163 13:59:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:03.163 13:59:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:03.163 13:59:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:03.163 13:59:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:03.163 13:59:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:03.163 13:59:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:03.163 13:59:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:03.163 13:59:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:03.163 
13:59:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:06:03.163 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:03.163 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:03.734 Using 'verbs' RDMA provider 00:06:19.737 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:06:31.968 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:06:32.229 Creating mk/config.mk...done. 00:06:32.229 Creating mk/cc.flags.mk...done. 00:06:32.229 Type 'make' to build. 00:06:32.229 14:00:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:06:32.229 14:00:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:32.229 14:00:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:32.229 14:00:20 -- common/autotest_common.sh@10 -- $ set +x 00:06:32.229 ************************************ 00:06:32.229 START TEST make 00:06:32.229 ************************************ 00:06:32.229 14:00:20 make -- common/autotest_common.sh@1129 -- $ make -j144 00:06:32.801 make[1]: Nothing to be done for 'all'. 00:06:34.184 The Meson build system 00:06:34.184 Version: 1.5.0 00:06:34.184 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:06:34.184 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:34.184 Build type: native build 00:06:34.184 Project name: libvfio-user 00:06:34.184 Project version: 0.0.1 00:06:34.184 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:34.184 C linker for the host machine: cc ld.bfd 2.40-14 00:06:34.184 Host machine cpu family: x86_64 00:06:34.184 Host machine cpu: x86_64 00:06:34.184 Run-time dependency threads found: YES 00:06:34.184 Library dl found: YES 00:06:34.184 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:34.184 Run-time dependency json-c found: YES 0.17 00:06:34.184 Run-time dependency cmocka found: YES 1.1.7 00:06:34.184 Program pytest-3 found: NO 00:06:34.184 Program flake8 found: NO 00:06:34.184 Program misspell-fixer found: NO 00:06:34.184 Program restructuredtext-lint found: NO 00:06:34.184 Program valgrind found: YES (/usr/bin/valgrind) 00:06:34.184 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:34.184 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:34.184 Compiler for C supports arguments -Wwrite-strings: YES 00:06:34.184 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:06:34.184 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:06:34.184 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:06:34.184 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:06:34.184 Build targets in project: 8 00:06:34.184 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:06:34.184 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:06:34.184 00:06:34.184 libvfio-user 0.0.1 00:06:34.184 00:06:34.184 User defined options 00:06:34.184 buildtype : debug 00:06:34.184 default_library: shared 00:06:34.184 libdir : /usr/local/lib 00:06:34.184 00:06:34.184 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:34.752 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:34.752 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:06:34.752 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:06:34.752 [3/37] Compiling C object samples/null.p/null.c.o 00:06:34.752 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:06:34.752 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:06:34.752 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:06:34.752 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:06:34.752 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:06:34.752 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:06:34.752 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:06:34.752 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:06:34.752 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:06:34.752 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:06:34.752 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:06:34.752 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:06:34.752 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:06:34.752 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:06:34.752 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:06:34.752 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:06:34.752 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:06:34.752 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:06:34.752 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:06:34.752 [23/37] Compiling C object samples/server.p/server.c.o 00:06:34.752 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:06:34.752 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:06:34.752 [26/37] Compiling C object samples/client.p/client.c.o 00:06:35.013 [27/37] Linking target samples/client 00:06:35.013 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:06:35.013 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:06:35.013 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:06:35.013 [31/37] Linking target test/unit_tests 00:06:35.013 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:06:35.013 [33/37] Linking target samples/server 00:06:35.013 [34/37] Linking target samples/lspci 00:06:35.275 [35/37] Linking target samples/gpio-pci-idio-16 00:06:35.275 [36/37] Linking target samples/null 00:06:35.275 [37/37] Linking target samples/shadow_ioeventfd_server 00:06:35.275 INFO: autodetecting backend as ninja 00:06:35.275 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:06:35.275 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:35.536 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:35.536 ninja: no work to do. 00:06:42.129 The Meson build system 00:06:42.129 Version: 1.5.0 00:06:42.129 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:06:42.129 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:06:42.129 Build type: native build 00:06:42.129 Program cat found: YES (/usr/bin/cat) 00:06:42.129 Project name: DPDK 00:06:42.129 Project version: 24.03.0 00:06:42.129 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:42.129 C linker for the host machine: cc ld.bfd 2.40-14 00:06:42.129 Host machine cpu family: x86_64 00:06:42.129 Host machine cpu: x86_64 00:06:42.129 Message: ## Building in Developer Mode ## 00:06:42.129 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:42.129 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:06:42.129 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:42.129 Program python3 found: YES (/usr/bin/python3) 00:06:42.129 Program cat found: YES (/usr/bin/cat) 00:06:42.129 Compiler for C supports arguments -march=native: YES 00:06:42.129 Checking for size of "void *" : 8 00:06:42.129 Checking for size of "void *" : 8 (cached) 00:06:42.129 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:06:42.129 Library m found: YES 00:06:42.129 Library numa found: YES 00:06:42.129 Has header "numaif.h" : YES 00:06:42.129 Library fdt found: NO 00:06:42.129 Library execinfo found: NO 00:06:42.129 Has header "execinfo.h" : YES 00:06:42.129 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:42.129 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:42.129 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:42.129 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:42.130 Run-time dependency openssl found: YES 3.1.1 00:06:42.130 Run-time dependency libpcap found: YES 1.10.4 00:06:42.130 Has header "pcap.h" with dependency libpcap: YES 00:06:42.130 Compiler for C supports arguments -Wcast-qual: YES 00:06:42.130 Compiler for C supports arguments -Wdeprecated: YES 00:06:42.130 Compiler for C supports arguments -Wformat: YES 00:06:42.130 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:42.130 Compiler for C supports arguments -Wformat-security: NO 00:06:42.130 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:42.130 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:42.130 Compiler for C supports arguments -Wnested-externs: YES 00:06:42.130 Compiler for C supports arguments -Wold-style-definition: YES 00:06:42.130 Compiler for C supports arguments -Wpointer-arith: YES 00:06:42.130 Compiler for C supports arguments -Wsign-compare: YES 00:06:42.130 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:42.130 Compiler for C supports arguments -Wundef: YES 00:06:42.130 Compiler for C supports arguments -Wwrite-strings: YES 00:06:42.130 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:42.130 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:06:42.130 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:42.130 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:42.130 Program objdump found: YES (/usr/bin/objdump) 00:06:42.130 Compiler for C supports arguments -mavx512f: YES 00:06:42.130 Checking if "AVX512 checking" compiles: YES 00:06:42.130 Fetching value of define "__SSE4_2__" : 1 00:06:42.130 Fetching value of define "__AES__" : 1 00:06:42.130 Fetching value of define "__AVX__" : 1 00:06:42.130 Fetching value of define "__AVX2__" : 1 00:06:42.130 Fetching value of define "__AVX512BW__" : 1 00:06:42.130 Fetching value of define "__AVX512CD__" : 1 00:06:42.130 Fetching value of define "__AVX512DQ__" : 1 00:06:42.130 Fetching value of define "__AVX512F__" : 1 00:06:42.130 Fetching value of define "__AVX512VL__" : 1 00:06:42.130 Fetching value of define "__PCLMUL__" : 1 00:06:42.130 Fetching value of define "__RDRND__" : 1 00:06:42.130 Fetching value of define "__RDSEED__" : 1 00:06:42.130 Fetching value of define "__VPCLMULQDQ__" : 1 00:06:42.130 Fetching value of define "__znver1__" : (undefined) 00:06:42.130 Fetching value of define "__znver2__" : (undefined) 00:06:42.130 Fetching value of define "__znver3__" : (undefined) 00:06:42.130 Fetching value of define "__znver4__" : (undefined) 00:06:42.130 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:42.130 Message: lib/log: Defining dependency "log" 00:06:42.130 Message: lib/kvargs: Defining dependency "kvargs" 00:06:42.130 Message: lib/telemetry: Defining dependency "telemetry" 00:06:42.130 Checking for function "getentropy" : NO 00:06:42.130 Message: lib/eal: Defining dependency "eal" 00:06:42.130 Message: lib/ring: Defining dependency "ring" 00:06:42.130 Message: lib/rcu: Defining dependency "rcu" 00:06:42.130 Message: lib/mempool: Defining dependency "mempool" 00:06:42.130 Message: lib/mbuf: Defining dependency "mbuf" 00:06:42.130 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:42.130 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:42.130 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:42.130 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:42.130 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:42.130 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:06:42.130 Compiler for C supports arguments -mpclmul: YES 00:06:42.130 Compiler for C supports arguments -maes: YES 00:06:42.130 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:42.130 Compiler for C supports arguments -mavx512bw: YES 00:06:42.130 Compiler for C supports arguments -mavx512dq: YES 00:06:42.130 Compiler for C supports arguments -mavx512vl: YES 00:06:42.130 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:42.130 Compiler for C supports arguments -mavx2: YES 00:06:42.130 Compiler for C supports arguments -mavx: YES 00:06:42.130 Message: lib/net: Defining dependency "net" 00:06:42.130 Message: lib/meter: Defining dependency "meter" 00:06:42.130 Message: lib/ethdev: Defining dependency "ethdev" 00:06:42.130 Message: lib/pci: Defining dependency "pci" 00:06:42.130 Message: lib/cmdline: Defining dependency "cmdline" 00:06:42.130 Message: lib/hash: Defining dependency "hash" 00:06:42.130 Message: lib/timer: Defining dependency "timer" 00:06:42.130 Message: lib/compressdev: Defining dependency "compressdev" 00:06:42.130 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:42.130 Message: lib/dmadev: Defining dependency "dmadev" 
00:06:42.130 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:42.130 Message: lib/power: Defining dependency "power" 00:06:42.130 Message: lib/reorder: Defining dependency "reorder" 00:06:42.130 Message: lib/security: Defining dependency "security" 00:06:42.130 Has header "linux/userfaultfd.h" : YES 00:06:42.130 Has header "linux/vduse.h" : YES 00:06:42.130 Message: lib/vhost: Defining dependency "vhost" 00:06:42.130 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:42.130 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:42.130 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:42.130 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:42.130 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:42.130 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:42.130 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:42.130 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:42.130 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:42.130 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:42.130 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:42.130 Configuring doxy-api-html.conf using configuration 00:06:42.130 Configuring doxy-api-man.conf using configuration 00:06:42.130 Program mandb found: YES (/usr/bin/mandb) 00:06:42.130 Program sphinx-build found: NO 00:06:42.130 Configuring rte_build_config.h using configuration 00:06:42.130 Message: 00:06:42.130 ================= 00:06:42.130 Applications Enabled 00:06:42.130 ================= 00:06:42.130 00:06:42.130 apps: 00:06:42.130 00:06:42.130 00:06:42.130 Message: 00:06:42.130 ================= 00:06:42.130 Libraries Enabled 00:06:42.130 ================= 00:06:42.130 00:06:42.130 libs: 00:06:42.130 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:42.130 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:42.130 cryptodev, dmadev, power, reorder, security, vhost, 00:06:42.130 00:06:42.130 Message: 00:06:42.130 =============== 00:06:42.130 Drivers Enabled 00:06:42.130 =============== 00:06:42.130 00:06:42.130 common: 00:06:42.130 00:06:42.130 bus: 00:06:42.130 pci, vdev, 00:06:42.130 mempool: 00:06:42.130 ring, 00:06:42.130 dma: 00:06:42.130 00:06:42.130 net: 00:06:42.130 00:06:42.130 crypto: 00:06:42.130 00:06:42.130 compress: 00:06:42.130 00:06:42.130 vdpa: 00:06:42.130 00:06:42.130 00:06:42.130 Message: 00:06:42.130 ================= 00:06:42.130 Content Skipped 00:06:42.130 ================= 00:06:42.130 00:06:42.130 apps: 00:06:42.130 dumpcap: explicitly disabled via build config 00:06:42.130 graph: explicitly disabled via build config 00:06:42.130 pdump: explicitly disabled via build config 00:06:42.130 proc-info: explicitly disabled via build config 00:06:42.130 test-acl: explicitly disabled via build config 00:06:42.130 test-bbdev: explicitly disabled via build config 00:06:42.130 test-cmdline: explicitly disabled via build config 00:06:42.130 test-compress-perf: explicitly disabled via build config 00:06:42.130 test-crypto-perf: explicitly disabled via build config 00:06:42.130 test-dma-perf: explicitly disabled via build config 00:06:42.130 test-eventdev: explicitly disabled via build config 00:06:42.130 test-fib: explicitly disabled via build config 00:06:42.130 test-flow-perf: explicitly disabled via build config 00:06:42.130 test-gpudev: explicitly disabled 
via build config 00:06:42.130 test-mldev: explicitly disabled via build config 00:06:42.130 test-pipeline: explicitly disabled via build config 00:06:42.130 test-pmd: explicitly disabled via build config 00:06:42.130 test-regex: explicitly disabled via build config 00:06:42.130 test-sad: explicitly disabled via build config 00:06:42.130 test-security-perf: explicitly disabled via build config 00:06:42.130 00:06:42.130 libs: 00:06:42.130 argparse: explicitly disabled via build config 00:06:42.130 metrics: explicitly disabled via build config 00:06:42.130 acl: explicitly disabled via build config 00:06:42.130 bbdev: explicitly disabled via build config 00:06:42.130 bitratestats: explicitly disabled via build config 00:06:42.130 bpf: explicitly disabled via build config 00:06:42.130 cfgfile: explicitly disabled via build config 00:06:42.130 distributor: explicitly disabled via build config 00:06:42.130 efd: explicitly disabled via build config 00:06:42.130 eventdev: explicitly disabled via build config 00:06:42.130 dispatcher: explicitly disabled via build config 00:06:42.130 gpudev: explicitly disabled via build config 00:06:42.130 gro: explicitly disabled via build config 00:06:42.130 gso: explicitly disabled via build config 00:06:42.130 ip_frag: explicitly disabled via build config 00:06:42.130 jobstats: explicitly disabled via build config 00:06:42.130 latencystats: explicitly disabled via build config 00:06:42.130 lpm: explicitly disabled via build config 00:06:42.130 member: explicitly disabled via build config 00:06:42.130 pcapng: explicitly disabled via build config 00:06:42.130 rawdev: explicitly disabled via build config 00:06:42.130 regexdev: explicitly disabled via build config 00:06:42.130 mldev: explicitly disabled via build config 00:06:42.130 rib: explicitly disabled via build config 00:06:42.130 sched: explicitly disabled via build config 00:06:42.130 stack: explicitly disabled via build config 00:06:42.130 ipsec: explicitly disabled via build config 00:06:42.130 pdcp: explicitly disabled via build config 00:06:42.130 fib: explicitly disabled via build config 00:06:42.131 port: explicitly disabled via build config 00:06:42.131 pdump: explicitly disabled via build config 00:06:42.131 table: explicitly disabled via build config 00:06:42.131 pipeline: explicitly disabled via build config 00:06:42.131 graph: explicitly disabled via build config 00:06:42.131 node: explicitly disabled via build config 00:06:42.131 00:06:42.131 drivers: 00:06:42.131 common/cpt: not in enabled drivers build config 00:06:42.131 common/dpaax: not in enabled drivers build config 00:06:42.131 common/iavf: not in enabled drivers build config 00:06:42.131 common/idpf: not in enabled drivers build config 00:06:42.131 common/ionic: not in enabled drivers build config 00:06:42.131 common/mvep: not in enabled drivers build config 00:06:42.131 common/octeontx: not in enabled drivers build config 00:06:42.131 bus/auxiliary: not in enabled drivers build config 00:06:42.131 bus/cdx: not in enabled drivers build config 00:06:42.131 bus/dpaa: not in enabled drivers build config 00:06:42.131 bus/fslmc: not in enabled drivers build config 00:06:42.131 bus/ifpga: not in enabled drivers build config 00:06:42.131 bus/platform: not in enabled drivers build config 00:06:42.131 bus/uacce: not in enabled drivers build config 00:06:42.131 bus/vmbus: not in enabled drivers build config 00:06:42.131 common/cnxk: not in enabled drivers build config 00:06:42.131 common/mlx5: not in enabled drivers build config 00:06:42.131 
common/nfp: not in enabled drivers build config 00:06:42.131 common/nitrox: not in enabled drivers build config 00:06:42.131 common/qat: not in enabled drivers build config 00:06:42.131 common/sfc_efx: not in enabled drivers build config 00:06:42.131 mempool/bucket: not in enabled drivers build config 00:06:42.131 mempool/cnxk: not in enabled drivers build config 00:06:42.131 mempool/dpaa: not in enabled drivers build config 00:06:42.131 mempool/dpaa2: not in enabled drivers build config 00:06:42.131 mempool/octeontx: not in enabled drivers build config 00:06:42.131 mempool/stack: not in enabled drivers build config 00:06:42.131 dma/cnxk: not in enabled drivers build config 00:06:42.131 dma/dpaa: not in enabled drivers build config 00:06:42.131 dma/dpaa2: not in enabled drivers build config 00:06:42.131 dma/hisilicon: not in enabled drivers build config 00:06:42.131 dma/idxd: not in enabled drivers build config 00:06:42.131 dma/ioat: not in enabled drivers build config 00:06:42.131 dma/skeleton: not in enabled drivers build config 00:06:42.131 net/af_packet: not in enabled drivers build config 00:06:42.131 net/af_xdp: not in enabled drivers build config 00:06:42.131 net/ark: not in enabled drivers build config 00:06:42.131 net/atlantic: not in enabled drivers build config 00:06:42.131 net/avp: not in enabled drivers build config 00:06:42.131 net/axgbe: not in enabled drivers build config 00:06:42.131 net/bnx2x: not in enabled drivers build config 00:06:42.131 net/bnxt: not in enabled drivers build config 00:06:42.131 net/bonding: not in enabled drivers build config 00:06:42.131 net/cnxk: not in enabled drivers build config 00:06:42.131 net/cpfl: not in enabled drivers build config 00:06:42.131 net/cxgbe: not in enabled drivers build config 00:06:42.131 net/dpaa: not in enabled drivers build config 00:06:42.131 net/dpaa2: not in enabled drivers build config 00:06:42.131 net/e1000: not in enabled drivers build config 00:06:42.131 net/ena: not in enabled drivers build config 00:06:42.131 net/enetc: not in enabled drivers build config 00:06:42.131 net/enetfec: not in enabled drivers build config 00:06:42.131 net/enic: not in enabled drivers build config 00:06:42.131 net/failsafe: not in enabled drivers build config 00:06:42.131 net/fm10k: not in enabled drivers build config 00:06:42.131 net/gve: not in enabled drivers build config 00:06:42.131 net/hinic: not in enabled drivers build config 00:06:42.131 net/hns3: not in enabled drivers build config 00:06:42.131 net/i40e: not in enabled drivers build config 00:06:42.131 net/iavf: not in enabled drivers build config 00:06:42.131 net/ice: not in enabled drivers build config 00:06:42.131 net/idpf: not in enabled drivers build config 00:06:42.131 net/igc: not in enabled drivers build config 00:06:42.131 net/ionic: not in enabled drivers build config 00:06:42.131 net/ipn3ke: not in enabled drivers build config 00:06:42.131 net/ixgbe: not in enabled drivers build config 00:06:42.131 net/mana: not in enabled drivers build config 00:06:42.131 net/memif: not in enabled drivers build config 00:06:42.131 net/mlx4: not in enabled drivers build config 00:06:42.131 net/mlx5: not in enabled drivers build config 00:06:42.131 net/mvneta: not in enabled drivers build config 00:06:42.131 net/mvpp2: not in enabled drivers build config 00:06:42.131 net/netvsc: not in enabled drivers build config 00:06:42.131 net/nfb: not in enabled drivers build config 00:06:42.131 net/nfp: not in enabled drivers build config 00:06:42.131 net/ngbe: not in enabled drivers build 
config 00:06:42.131 net/null: not in enabled drivers build config 00:06:42.131 net/octeontx: not in enabled drivers build config 00:06:42.131 net/octeon_ep: not in enabled drivers build config 00:06:42.131 net/pcap: not in enabled drivers build config 00:06:42.131 net/pfe: not in enabled drivers build config 00:06:42.131 net/qede: not in enabled drivers build config 00:06:42.131 net/ring: not in enabled drivers build config 00:06:42.131 net/sfc: not in enabled drivers build config 00:06:42.131 net/softnic: not in enabled drivers build config 00:06:42.131 net/tap: not in enabled drivers build config 00:06:42.131 net/thunderx: not in enabled drivers build config 00:06:42.131 net/txgbe: not in enabled drivers build config 00:06:42.131 net/vdev_netvsc: not in enabled drivers build config 00:06:42.131 net/vhost: not in enabled drivers build config 00:06:42.131 net/virtio: not in enabled drivers build config 00:06:42.131 net/vmxnet3: not in enabled drivers build config 00:06:42.131 raw/*: missing internal dependency, "rawdev" 00:06:42.131 crypto/armv8: not in enabled drivers build config 00:06:42.131 crypto/bcmfs: not in enabled drivers build config 00:06:42.131 crypto/caam_jr: not in enabled drivers build config 00:06:42.131 crypto/ccp: not in enabled drivers build config 00:06:42.131 crypto/cnxk: not in enabled drivers build config 00:06:42.131 crypto/dpaa_sec: not in enabled drivers build config 00:06:42.131 crypto/dpaa2_sec: not in enabled drivers build config 00:06:42.131 crypto/ipsec_mb: not in enabled drivers build config 00:06:42.131 crypto/mlx5: not in enabled drivers build config 00:06:42.131 crypto/mvsam: not in enabled drivers build config 00:06:42.131 crypto/nitrox: not in enabled drivers build config 00:06:42.131 crypto/null: not in enabled drivers build config 00:06:42.131 crypto/octeontx: not in enabled drivers build config 00:06:42.131 crypto/openssl: not in enabled drivers build config 00:06:42.131 crypto/scheduler: not in enabled drivers build config 00:06:42.131 crypto/uadk: not in enabled drivers build config 00:06:42.131 crypto/virtio: not in enabled drivers build config 00:06:42.131 compress/isal: not in enabled drivers build config 00:06:42.131 compress/mlx5: not in enabled drivers build config 00:06:42.131 compress/nitrox: not in enabled drivers build config 00:06:42.131 compress/octeontx: not in enabled drivers build config 00:06:42.131 compress/zlib: not in enabled drivers build config 00:06:42.131 regex/*: missing internal dependency, "regexdev" 00:06:42.131 ml/*: missing internal dependency, "mldev" 00:06:42.131 vdpa/ifc: not in enabled drivers build config 00:06:42.131 vdpa/mlx5: not in enabled drivers build config 00:06:42.131 vdpa/nfp: not in enabled drivers build config 00:06:42.131 vdpa/sfc: not in enabled drivers build config 00:06:42.131 event/*: missing internal dependency, "eventdev" 00:06:42.131 baseband/*: missing internal dependency, "bbdev" 00:06:42.131 gpu/*: missing internal dependency, "gpudev" 00:06:42.131 00:06:42.131 00:06:42.131 Build targets in project: 84 00:06:42.131 00:06:42.131 DPDK 24.03.0 00:06:42.131 00:06:42.131 User defined options 00:06:42.131 buildtype : debug 00:06:42.131 default_library : shared 00:06:42.131 libdir : lib 00:06:42.131 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:42.131 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:42.131 c_link_args : 00:06:42.131 cpu_instruction_set: native 00:06:42.131 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:06:42.131 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:06:42.131 enable_docs : false 00:06:42.131 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:42.131 enable_kmods : false 00:06:42.131 max_lcores : 128 00:06:42.131 tests : false 00:06:42.131 00:06:42.131 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:42.131 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:06:42.131 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:42.131 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:42.131 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:42.131 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:42.131 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:42.131 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:42.131 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:42.131 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:42.131 [9/267] Linking static target lib/librte_kvargs.a 00:06:42.131 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:42.131 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:42.131 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:42.131 [13/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:42.131 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:42.131 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:42.131 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:42.131 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:42.131 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:42.132 [19/267] Linking static target lib/librte_log.a 00:06:42.132 [20/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:42.132 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:42.132 [22/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:42.132 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:42.391 [24/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:42.391 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:42.391 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:42.391 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:42.391 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:42.391 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:42.391 [30/267] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:06:42.391 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:42.391 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:42.391 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:42.391 [34/267] Linking static target lib/librte_pci.a 00:06:42.391 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:42.391 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:42.391 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:42.391 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:42.649 [39/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:42.649 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.649 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:42.649 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:42.649 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:42.649 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:42.649 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.649 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:42.650 [47/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:42.650 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:42.650 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:42.650 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:42.650 [51/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:42.650 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:42.650 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:42.650 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:42.650 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:42.650 [56/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:42.650 [57/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:42.650 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:42.650 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:42.650 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:42.650 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:42.650 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:42.650 [63/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:42.650 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:42.650 [65/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:42.650 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:42.650 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:42.650 [68/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:42.650 [69/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:42.650 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:42.650 [71/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:42.650 [72/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:42.650 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:42.650 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:42.650 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:42.650 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:42.650 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:42.650 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:42.650 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:42.650 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:42.650 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:42.650 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:42.650 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:42.650 [84/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:42.650 [85/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:42.650 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:42.650 [87/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:42.650 [88/267] Linking static target lib/librte_telemetry.a 00:06:42.650 [89/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:42.650 [90/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:42.650 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:42.650 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:42.650 [93/267] Linking static target lib/librte_ring.a 00:06:42.650 [94/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:42.650 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:42.650 [96/267] Linking static target lib/librte_meter.a 00:06:42.650 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:42.650 [98/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:06:42.650 [99/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:42.650 [100/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:42.650 [101/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:42.650 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:42.650 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:42.650 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:42.650 [105/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:42.650 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:42.650 [107/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:42.650 [108/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:42.650 [109/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:42.650 [110/267] Compiling C 
object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:42.650 [111/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:42.650 [112/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:42.650 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:42.650 [114/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:42.650 [115/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:42.650 [116/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:42.650 [117/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:42.650 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:42.650 [119/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:42.650 [120/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:42.650 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:42.650 [122/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:42.650 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:42.650 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:42.650 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:42.650 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:42.650 [127/267] Linking static target lib/librte_timer.a 00:06:42.650 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:42.910 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:42.910 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:42.910 [131/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:42.910 [132/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:42.910 [133/267] Linking static target lib/librte_cmdline.a 00:06:42.910 [134/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:42.910 [135/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:42.910 [136/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:42.910 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:42.910 [138/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:42.910 [139/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:42.910 [140/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:42.910 [141/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:42.910 [142/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:42.910 [143/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:42.910 [144/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:42.910 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:42.910 [146/267] Linking static target lib/librte_dmadev.a 00:06:42.910 [147/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:42.910 [148/267] Linking static target lib/librte_mempool.a 00:06:42.910 [149/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:42.910 [150/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture 
output) 00:06:42.910 [151/267] Linking static target lib/librte_compressdev.a 00:06:42.910 [152/267] Linking static target lib/librte_net.a 00:06:42.910 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:42.910 [154/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:42.910 [155/267] Linking static target lib/librte_rcu.a 00:06:42.910 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:42.910 [157/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:42.910 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:42.910 [159/267] Linking target lib/librte_log.so.24.1 00:06:42.910 [160/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:42.910 [161/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:42.910 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:42.910 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:42.910 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:42.910 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:42.910 [166/267] Linking static target lib/librte_power.a 00:06:42.910 [167/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:42.910 [168/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:42.910 [169/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:42.910 [170/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:42.910 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:42.910 [172/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:42.910 [173/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:42.910 [174/267] Linking static target lib/librte_reorder.a 00:06:42.910 [175/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:42.910 [176/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:42.910 [177/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:42.910 [178/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:42.910 [179/267] Linking static target drivers/librte_bus_vdev.a 00:06:42.910 [180/267] Linking static target lib/librte_eal.a 00:06:42.910 [181/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:42.910 [182/267] Linking static target lib/librte_mbuf.a 00:06:42.910 [183/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.910 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:42.910 [185/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:42.910 [186/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:42.910 [187/267] Linking static target lib/librte_security.a 00:06:42.910 [188/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:42.910 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:43.169 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:43.169 [191/267] Linking static target lib/librte_hash.a 00:06:43.169 [192/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.169 
[193/267] Linking target lib/librte_kvargs.so.24.1 00:06:43.169 [194/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:43.169 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:43.169 [196/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:43.169 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:43.169 [198/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:43.169 [199/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:43.169 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:43.169 [201/267] Linking static target drivers/librte_mempool_ring.a 00:06:43.169 [202/267] Linking static target drivers/librte_bus_pci.a 00:06:43.169 [203/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:43.169 [204/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:43.169 [205/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:43.169 [206/267] Linking static target lib/librte_cryptodev.a 00:06:43.169 [207/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.169 [208/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.429 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.429 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.429 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.429 [212/267] Linking target lib/librte_telemetry.so.24.1 00:06:43.429 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.429 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:43.691 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.691 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.691 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:43.691 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.691 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:43.691 [220/267] Linking static target lib/librte_ethdev.a 00:06:43.951 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.951 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.951 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:43.951 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:44.211 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:44.211 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:44.781 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:44.781 [228/267] Linking static target lib/librte_vhost.a 00:06:45.352 
[229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:46.742 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:53.450 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.436 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:54.697 [233/267] Linking target lib/librte_eal.so.24.1 00:06:54.697 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:54.697 [235/267] Linking target lib/librte_timer.so.24.1 00:06:54.697 [236/267] Linking target lib/librte_ring.so.24.1 00:06:54.697 [237/267] Linking target lib/librte_meter.so.24.1 00:06:54.697 [238/267] Linking target lib/librte_pci.so.24.1 00:06:54.697 [239/267] Linking target lib/librte_dmadev.so.24.1 00:06:54.697 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:06:54.957 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:54.957 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:54.957 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:54.957 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:54.957 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:54.957 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:06:54.957 [247/267] Linking target lib/librte_mempool.so.24.1 00:06:54.957 [248/267] Linking target lib/librte_rcu.so.24.1 00:06:54.957 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:54.957 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:55.217 [251/267] Linking target lib/librte_mbuf.so.24.1 00:06:55.217 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:06:55.217 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:55.217 [254/267] Linking target lib/librte_net.so.24.1 00:06:55.217 [255/267] Linking target lib/librte_compressdev.so.24.1 00:06:55.217 [256/267] Linking target lib/librte_reorder.so.24.1 00:06:55.217 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:06:55.478 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:55.478 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:55.478 [260/267] Linking target lib/librte_hash.so.24.1 00:06:55.478 [261/267] Linking target lib/librte_cmdline.so.24.1 00:06:55.478 [262/267] Linking target lib/librte_security.so.24.1 00:06:55.478 [263/267] Linking target lib/librte_ethdev.so.24.1 00:06:55.738 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:55.738 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:55.738 [266/267] Linking target lib/librte_power.so.24.1 00:06:55.738 [267/267] Linking target lib/librte_vhost.so.24.1 00:06:55.738 INFO: autodetecting backend as ninja 00:06:55.738 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:06:59.940 CC lib/log/log.o 00:06:59.940 CC lib/log/log_flags.o 00:06:59.940 CC lib/log/log_deprecated.o 00:06:59.940 CC lib/ut_mock/mock.o 00:06:59.940 CC lib/ut/ut.o 
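The DPDK sub-build above is driven by the meson options summarized before the ninja run (enable_drivers, disable_libs, enable_docs=false, enable_kmods=false, tests=false, max_lcores=128). The exact invocation comes from SPDK's DPDK build wrapper and is not captured in this log; purely as an illustrative sketch, a configuration matching that summary would normally be produced with standard DPDK meson options along these lines:

```sh
# Sketch only -- the real command line used by the SPDK build scripts is not
# shown in the log. Option names are standard DPDK meson options; the values
# are copied from the configuration summary printed above. Run from the dpdk
# source tree.
meson setup build-tmp \
    -Dmax_lcores=128 \
    -Dtests=false \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Ddisable_libs=port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
ninja -C build-tmp
```

Trimming the driver and library lists this way keeps the embedded DPDK build small, limited to the bus, mempool/ring and power components listed in the summary, which is why ninja only has 267 targets to build here.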
00:06:59.940 LIB libspdk_log.a 00:06:59.940 LIB libspdk_ut.a 00:06:59.940 LIB libspdk_ut_mock.a 00:06:59.940 SO libspdk_log.so.7.1 00:06:59.940 SO libspdk_ut.so.2.0 00:06:59.940 SO libspdk_ut_mock.so.6.0 00:06:59.940 SYMLINK libspdk_log.so 00:06:59.940 SYMLINK libspdk_ut.so 00:06:59.940 SYMLINK libspdk_ut_mock.so 00:06:59.940 CC lib/ioat/ioat.o 00:07:00.202 CC lib/dma/dma.o 00:07:00.202 CXX lib/trace_parser/trace.o 00:07:00.202 CC lib/util/base64.o 00:07:00.202 CC lib/util/bit_array.o 00:07:00.202 CC lib/util/cpuset.o 00:07:00.202 CC lib/util/crc16.o 00:07:00.202 CC lib/util/crc32.o 00:07:00.202 CC lib/util/crc32c.o 00:07:00.202 CC lib/util/crc32_ieee.o 00:07:00.202 CC lib/util/crc64.o 00:07:00.202 CC lib/util/dif.o 00:07:00.202 CC lib/util/fd.o 00:07:00.202 CC lib/util/file.o 00:07:00.202 CC lib/util/fd_group.o 00:07:00.202 CC lib/util/hexlify.o 00:07:00.202 CC lib/util/iov.o 00:07:00.202 CC lib/util/math.o 00:07:00.202 CC lib/util/net.o 00:07:00.202 CC lib/util/pipe.o 00:07:00.202 CC lib/util/strerror_tls.o 00:07:00.202 CC lib/util/string.o 00:07:00.202 CC lib/util/uuid.o 00:07:00.202 CC lib/util/xor.o 00:07:00.202 CC lib/util/zipf.o 00:07:00.202 CC lib/util/md5.o 00:07:00.202 CC lib/vfio_user/host/vfio_user_pci.o 00:07:00.202 CC lib/vfio_user/host/vfio_user.o 00:07:00.202 LIB libspdk_dma.a 00:07:00.464 SO libspdk_dma.so.5.0 00:07:00.464 LIB libspdk_ioat.a 00:07:00.464 SO libspdk_ioat.so.7.0 00:07:00.464 SYMLINK libspdk_dma.so 00:07:00.464 SYMLINK libspdk_ioat.so 00:07:00.464 LIB libspdk_vfio_user.a 00:07:00.464 SO libspdk_vfio_user.so.5.0 00:07:00.725 LIB libspdk_util.a 00:07:00.725 SYMLINK libspdk_vfio_user.so 00:07:00.725 SO libspdk_util.so.10.1 00:07:00.725 SYMLINK libspdk_util.so 00:07:00.986 LIB libspdk_trace_parser.a 00:07:00.986 SO libspdk_trace_parser.so.6.0 00:07:00.986 SYMLINK libspdk_trace_parser.so 00:07:01.247 CC lib/json/json_parse.o 00:07:01.247 CC lib/vmd/vmd.o 00:07:01.248 CC lib/json/json_util.o 00:07:01.248 CC lib/json/json_write.o 00:07:01.248 CC lib/vmd/led.o 00:07:01.248 CC lib/rdma_utils/rdma_utils.o 00:07:01.248 CC lib/conf/conf.o 00:07:01.248 CC lib/env_dpdk/env.o 00:07:01.248 CC lib/env_dpdk/memory.o 00:07:01.248 CC lib/env_dpdk/pci.o 00:07:01.248 CC lib/env_dpdk/init.o 00:07:01.248 CC lib/env_dpdk/threads.o 00:07:01.248 CC lib/idxd/idxd.o 00:07:01.248 CC lib/env_dpdk/pci_ioat.o 00:07:01.248 CC lib/idxd/idxd_user.o 00:07:01.248 CC lib/env_dpdk/pci_virtio.o 00:07:01.248 CC lib/idxd/idxd_kernel.o 00:07:01.248 CC lib/env_dpdk/pci_vmd.o 00:07:01.248 CC lib/env_dpdk/pci_idxd.o 00:07:01.248 CC lib/env_dpdk/pci_event.o 00:07:01.248 CC lib/env_dpdk/sigbus_handler.o 00:07:01.248 CC lib/env_dpdk/pci_dpdk.o 00:07:01.248 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:01.248 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:01.509 LIB libspdk_conf.a 00:07:01.509 SO libspdk_conf.so.6.0 00:07:01.509 LIB libspdk_json.a 00:07:01.509 LIB libspdk_rdma_utils.a 00:07:01.509 SYMLINK libspdk_conf.so 00:07:01.509 SO libspdk_json.so.6.0 00:07:01.509 SO libspdk_rdma_utils.so.1.0 00:07:01.509 SYMLINK libspdk_json.so 00:07:01.509 SYMLINK libspdk_rdma_utils.so 00:07:01.769 LIB libspdk_idxd.a 00:07:01.769 LIB libspdk_vmd.a 00:07:01.769 SO libspdk_idxd.so.12.1 00:07:01.769 SO libspdk_vmd.so.6.0 00:07:01.769 SYMLINK libspdk_idxd.so 00:07:01.769 SYMLINK libspdk_vmd.so 00:07:02.030 CC lib/jsonrpc/jsonrpc_server.o 00:07:02.030 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:02.030 CC lib/jsonrpc/jsonrpc_client.o 00:07:02.030 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:02.030 CC lib/rdma_provider/common.o 00:07:02.030 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:07:02.030 LIB libspdk_env_dpdk.a 00:07:02.030 LIB libspdk_jsonrpc.a 00:07:02.291 LIB libspdk_rdma_provider.a 00:07:02.291 SO libspdk_env_dpdk.so.15.1 00:07:02.291 SO libspdk_jsonrpc.so.6.0 00:07:02.291 SO libspdk_rdma_provider.so.7.0 00:07:02.291 SYMLINK libspdk_jsonrpc.so 00:07:02.291 SYMLINK libspdk_rdma_provider.so 00:07:02.291 SYMLINK libspdk_env_dpdk.so 00:07:02.553 CC lib/rpc/rpc.o 00:07:02.815 LIB libspdk_rpc.a 00:07:02.815 SO libspdk_rpc.so.6.0 00:07:03.076 SYMLINK libspdk_rpc.so 00:07:03.336 CC lib/keyring/keyring_rpc.o 00:07:03.336 CC lib/trace/trace.o 00:07:03.336 CC lib/keyring/keyring.o 00:07:03.336 CC lib/trace/trace_flags.o 00:07:03.336 CC lib/trace/trace_rpc.o 00:07:03.336 CC lib/notify/notify.o 00:07:03.336 CC lib/notify/notify_rpc.o 00:07:03.596 LIB libspdk_notify.a 00:07:03.596 SO libspdk_notify.so.6.0 00:07:03.596 LIB libspdk_keyring.a 00:07:03.596 LIB libspdk_trace.a 00:07:03.596 SO libspdk_keyring.so.2.0 00:07:03.596 SYMLINK libspdk_notify.so 00:07:03.596 SO libspdk_trace.so.11.0 00:07:03.596 SYMLINK libspdk_keyring.so 00:07:03.596 SYMLINK libspdk_trace.so 00:07:04.166 CC lib/thread/thread.o 00:07:04.166 CC lib/thread/iobuf.o 00:07:04.166 CC lib/sock/sock.o 00:07:04.166 CC lib/sock/sock_rpc.o 00:07:04.426 LIB libspdk_sock.a 00:07:04.426 SO libspdk_sock.so.10.0 00:07:04.686 SYMLINK libspdk_sock.so 00:07:04.946 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:04.946 CC lib/nvme/nvme_ctrlr.o 00:07:04.946 CC lib/nvme/nvme_fabric.o 00:07:04.946 CC lib/nvme/nvme_ns_cmd.o 00:07:04.946 CC lib/nvme/nvme_ns.o 00:07:04.946 CC lib/nvme/nvme_pcie_common.o 00:07:04.946 CC lib/nvme/nvme_pcie.o 00:07:04.946 CC lib/nvme/nvme_qpair.o 00:07:04.946 CC lib/nvme/nvme.o 00:07:04.946 CC lib/nvme/nvme_quirks.o 00:07:04.946 CC lib/nvme/nvme_transport.o 00:07:04.946 CC lib/nvme/nvme_discovery.o 00:07:04.946 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:04.946 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:04.946 CC lib/nvme/nvme_tcp.o 00:07:04.946 CC lib/nvme/nvme_opal.o 00:07:04.946 CC lib/nvme/nvme_io_msg.o 00:07:04.946 CC lib/nvme/nvme_poll_group.o 00:07:04.946 CC lib/nvme/nvme_zns.o 00:07:04.946 CC lib/nvme/nvme_stubs.o 00:07:04.946 CC lib/nvme/nvme_auth.o 00:07:04.946 CC lib/nvme/nvme_cuse.o 00:07:04.946 CC lib/nvme/nvme_vfio_user.o 00:07:04.946 CC lib/nvme/nvme_rdma.o 00:07:05.514 LIB libspdk_thread.a 00:07:05.514 SO libspdk_thread.so.11.0 00:07:05.514 SYMLINK libspdk_thread.so 00:07:05.775 CC lib/accel/accel.o 00:07:05.775 CC lib/accel/accel_rpc.o 00:07:05.775 CC lib/accel/accel_sw.o 00:07:06.035 CC lib/blob/blobstore.o 00:07:06.035 CC lib/blob/request.o 00:07:06.035 CC lib/virtio/virtio.o 00:07:06.035 CC lib/blob/zeroes.o 00:07:06.035 CC lib/virtio/virtio_vhost_user.o 00:07:06.035 CC lib/blob/blob_bs_dev.o 00:07:06.035 CC lib/fsdev/fsdev.o 00:07:06.035 CC lib/virtio/virtio_vfio_user.o 00:07:06.035 CC lib/fsdev/fsdev_io.o 00:07:06.035 CC lib/virtio/virtio_pci.o 00:07:06.035 CC lib/fsdev/fsdev_rpc.o 00:07:06.035 CC lib/vfu_tgt/tgt_endpoint.o 00:07:06.035 CC lib/vfu_tgt/tgt_rpc.o 00:07:06.035 CC lib/init/json_config.o 00:07:06.035 CC lib/init/subsystem.o 00:07:06.035 CC lib/init/subsystem_rpc.o 00:07:06.035 CC lib/init/rpc.o 00:07:06.296 LIB libspdk_init.a 00:07:06.296 SO libspdk_init.so.6.0 00:07:06.296 LIB libspdk_vfu_tgt.a 00:07:06.296 LIB libspdk_virtio.a 00:07:06.296 SO libspdk_vfu_tgt.so.3.0 00:07:06.296 SYMLINK libspdk_init.so 00:07:06.296 SO libspdk_virtio.so.7.0 00:07:06.296 SYMLINK libspdk_vfu_tgt.so 00:07:06.296 SYMLINK libspdk_virtio.so 00:07:06.556 LIB 
libspdk_fsdev.a 00:07:06.556 SO libspdk_fsdev.so.2.0 00:07:06.556 CC lib/event/app.o 00:07:06.556 CC lib/event/reactor.o 00:07:06.556 CC lib/event/log_rpc.o 00:07:06.556 CC lib/event/app_rpc.o 00:07:06.556 CC lib/event/scheduler_static.o 00:07:06.556 SYMLINK libspdk_fsdev.so 00:07:06.815 LIB libspdk_accel.a 00:07:06.816 LIB libspdk_nvme.a 00:07:06.816 SO libspdk_accel.so.16.0 00:07:07.075 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:07.075 SYMLINK libspdk_accel.so 00:07:07.075 SO libspdk_nvme.so.15.0 00:07:07.075 LIB libspdk_event.a 00:07:07.075 SO libspdk_event.so.14.0 00:07:07.335 SYMLINK libspdk_event.so 00:07:07.335 SYMLINK libspdk_nvme.so 00:07:07.335 CC lib/bdev/bdev.o 00:07:07.335 CC lib/bdev/bdev_rpc.o 00:07:07.335 CC lib/bdev/bdev_zone.o 00:07:07.335 CC lib/bdev/part.o 00:07:07.335 CC lib/bdev/scsi_nvme.o 00:07:07.594 LIB libspdk_fuse_dispatcher.a 00:07:07.594 SO libspdk_fuse_dispatcher.so.1.0 00:07:07.853 SYMLINK libspdk_fuse_dispatcher.so 00:07:08.795 LIB libspdk_blob.a 00:07:08.795 SO libspdk_blob.so.12.0 00:07:08.795 SYMLINK libspdk_blob.so 00:07:09.056 CC lib/blobfs/blobfs.o 00:07:09.056 CC lib/blobfs/tree.o 00:07:09.056 CC lib/lvol/lvol.o 00:07:09.998 LIB libspdk_bdev.a 00:07:09.998 LIB libspdk_blobfs.a 00:07:09.998 SO libspdk_bdev.so.17.0 00:07:09.998 SO libspdk_blobfs.so.11.0 00:07:09.998 LIB libspdk_lvol.a 00:07:09.998 SO libspdk_lvol.so.11.0 00:07:09.998 SYMLINK libspdk_blobfs.so 00:07:09.998 SYMLINK libspdk_bdev.so 00:07:09.998 SYMLINK libspdk_lvol.so 00:07:10.261 CC lib/ftl/ftl_core.o 00:07:10.261 CC lib/ftl/ftl_init.o 00:07:10.261 CC lib/ftl/ftl_layout.o 00:07:10.261 CC lib/ftl/ftl_debug.o 00:07:10.261 CC lib/ftl/ftl_io.o 00:07:10.261 CC lib/ftl/ftl_sb.o 00:07:10.261 CC lib/ftl/ftl_l2p.o 00:07:10.261 CC lib/ftl/ftl_l2p_flat.o 00:07:10.261 CC lib/ftl/ftl_nv_cache.o 00:07:10.261 CC lib/nvmf/ctrlr.o 00:07:10.261 CC lib/ftl/ftl_band.o 00:07:10.261 CC lib/nvmf/ctrlr_discovery.o 00:07:10.261 CC lib/ublk/ublk.o 00:07:10.261 CC lib/ftl/ftl_band_ops.o 00:07:10.261 CC lib/scsi/dev.o 00:07:10.261 CC lib/nbd/nbd.o 00:07:10.261 CC lib/ftl/ftl_writer.o 00:07:10.261 CC lib/nvmf/ctrlr_bdev.o 00:07:10.261 CC lib/ublk/ublk_rpc.o 00:07:10.261 CC lib/scsi/lun.o 00:07:10.261 CC lib/nbd/nbd_rpc.o 00:07:10.261 CC lib/ftl/ftl_rq.o 00:07:10.261 CC lib/nvmf/subsystem.o 00:07:10.261 CC lib/ftl/ftl_reloc.o 00:07:10.261 CC lib/scsi/port.o 00:07:10.261 CC lib/scsi/scsi.o 00:07:10.261 CC lib/ftl/ftl_l2p_cache.o 00:07:10.261 CC lib/nvmf/nvmf.o 00:07:10.261 CC lib/ftl/ftl_p2l.o 00:07:10.261 CC lib/scsi/scsi_bdev.o 00:07:10.261 CC lib/ftl/ftl_p2l_log.o 00:07:10.261 CC lib/nvmf/nvmf_rpc.o 00:07:10.261 CC lib/nvmf/transport.o 00:07:10.261 CC lib/scsi/scsi_pr.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt.o 00:07:10.261 CC lib/nvmf/tcp.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:10.261 CC lib/scsi/scsi_rpc.o 00:07:10.261 CC lib/scsi/task.o 00:07:10.261 CC lib/nvmf/stubs.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:10.261 CC lib/nvmf/mdns_server.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:10.261 CC lib/nvmf/vfio_user.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:10.261 CC lib/nvmf/rdma.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:10.261 CC lib/nvmf/auth.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:10.261 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:10.261 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:10.261 CC lib/ftl/utils/ftl_conf.o 00:07:10.261 CC lib/ftl/utils/ftl_md.o 00:07:10.261 CC lib/ftl/utils/ftl_mempool.o 00:07:10.261 CC lib/ftl/utils/ftl_bitmap.o 00:07:10.261 CC lib/ftl/utils/ftl_property.o 00:07:10.261 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:10.261 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:10.261 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:10.261 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:10.261 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:10.261 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:10.261 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:10.261 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:10.261 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:10.261 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:10.522 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:10.522 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:10.522 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:10.522 CC lib/ftl/base/ftl_base_dev.o 00:07:10.522 CC lib/ftl/base/ftl_base_bdev.o 00:07:10.522 CC lib/ftl/ftl_trace.o 00:07:11.091 LIB libspdk_scsi.a 00:07:11.091 LIB libspdk_nbd.a 00:07:11.091 SO libspdk_scsi.so.9.0 00:07:11.091 SO libspdk_nbd.so.7.0 00:07:11.351 LIB libspdk_ublk.a 00:07:11.351 SYMLINK libspdk_nbd.so 00:07:11.351 SYMLINK libspdk_scsi.so 00:07:11.351 SO libspdk_ublk.so.3.0 00:07:11.351 SYMLINK libspdk_ublk.so 00:07:11.611 LIB libspdk_ftl.a 00:07:11.611 CC lib/vhost/vhost.o 00:07:11.611 CC lib/vhost/vhost_rpc.o 00:07:11.611 CC lib/vhost/vhost_scsi.o 00:07:11.611 CC lib/vhost/vhost_blk.o 00:07:11.611 CC lib/vhost/rte_vhost_user.o 00:07:11.611 CC lib/iscsi/conn.o 00:07:11.611 CC lib/iscsi/init_grp.o 00:07:11.611 CC lib/iscsi/iscsi.o 00:07:11.611 CC lib/iscsi/param.o 00:07:11.611 CC lib/iscsi/portal_grp.o 00:07:11.611 CC lib/iscsi/tgt_node.o 00:07:11.611 CC lib/iscsi/iscsi_subsystem.o 00:07:11.611 CC lib/iscsi/iscsi_rpc.o 00:07:11.611 CC lib/iscsi/task.o 00:07:11.611 SO libspdk_ftl.so.9.0 00:07:12.182 SYMLINK libspdk_ftl.so 00:07:12.443 LIB libspdk_nvmf.a 00:07:12.443 SO libspdk_nvmf.so.20.0 00:07:12.704 LIB libspdk_vhost.a 00:07:12.704 SO libspdk_vhost.so.8.0 00:07:12.704 SYMLINK libspdk_nvmf.so 00:07:12.704 SYMLINK libspdk_vhost.so 00:07:12.965 LIB libspdk_iscsi.a 00:07:12.965 SO libspdk_iscsi.so.8.0 00:07:12.965 SYMLINK libspdk_iscsi.so 00:07:13.537 CC module/env_dpdk/env_dpdk_rpc.o 00:07:13.538 CC module/vfu_device/vfu_virtio.o 00:07:13.538 CC module/vfu_device/vfu_virtio_blk.o 00:07:13.538 CC module/vfu_device/vfu_virtio_scsi.o 00:07:13.538 CC module/vfu_device/vfu_virtio_rpc.o 00:07:13.538 CC module/vfu_device/vfu_virtio_fs.o 00:07:13.799 LIB libspdk_env_dpdk_rpc.a 00:07:13.799 CC module/accel/ioat/accel_ioat.o 00:07:13.799 CC module/accel/ioat/accel_ioat_rpc.o 00:07:13.799 CC module/accel/error/accel_error.o 00:07:13.799 CC module/accel/error/accel_error_rpc.o 00:07:13.799 CC module/accel/iaa/accel_iaa.o 00:07:13.799 CC module/accel/iaa/accel_iaa_rpc.o 00:07:13.799 CC module/accel/dsa/accel_dsa.o 00:07:13.799 CC module/accel/dsa/accel_dsa_rpc.o 00:07:13.799 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:13.799 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:13.799 CC module/sock/posix/posix.o 00:07:13.799 CC module/blob/bdev/blob_bdev.o 00:07:13.799 CC module/scheduler/gscheduler/gscheduler.o 00:07:13.799 CC module/keyring/linux/keyring.o 00:07:13.799 CC module/fsdev/aio/fsdev_aio.o 00:07:13.799 CC module/keyring/linux/keyring_rpc.o 00:07:13.799 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:13.799 CC module/keyring/file/keyring.o 00:07:13.799 CC module/fsdev/aio/linux_aio_mgr.o 
00:07:13.799 CC module/keyring/file/keyring_rpc.o 00:07:13.799 SO libspdk_env_dpdk_rpc.so.6.0 00:07:14.060 SYMLINK libspdk_env_dpdk_rpc.so 00:07:14.060 LIB libspdk_scheduler_gscheduler.a 00:07:14.060 LIB libspdk_keyring_file.a 00:07:14.060 LIB libspdk_scheduler_dpdk_governor.a 00:07:14.060 LIB libspdk_keyring_linux.a 00:07:14.060 LIB libspdk_accel_ioat.a 00:07:14.060 LIB libspdk_accel_error.a 00:07:14.060 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:14.060 SO libspdk_scheduler_gscheduler.so.4.0 00:07:14.060 LIB libspdk_accel_iaa.a 00:07:14.060 SO libspdk_keyring_file.so.2.0 00:07:14.060 SO libspdk_keyring_linux.so.1.0 00:07:14.060 LIB libspdk_scheduler_dynamic.a 00:07:14.060 SO libspdk_accel_ioat.so.6.0 00:07:14.060 SO libspdk_accel_error.so.2.0 00:07:14.060 SO libspdk_accel_iaa.so.3.0 00:07:14.060 SYMLINK libspdk_scheduler_gscheduler.so 00:07:14.060 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:14.060 SO libspdk_scheduler_dynamic.so.4.0 00:07:14.060 SYMLINK libspdk_keyring_file.so 00:07:14.060 LIB libspdk_accel_dsa.a 00:07:14.060 LIB libspdk_blob_bdev.a 00:07:14.060 SYMLINK libspdk_keyring_linux.so 00:07:14.060 SYMLINK libspdk_accel_error.so 00:07:14.060 SYMLINK libspdk_accel_ioat.so 00:07:14.320 SYMLINK libspdk_accel_iaa.so 00:07:14.320 SO libspdk_accel_dsa.so.5.0 00:07:14.320 SO libspdk_blob_bdev.so.12.0 00:07:14.320 SYMLINK libspdk_scheduler_dynamic.so 00:07:14.320 LIB libspdk_vfu_device.a 00:07:14.320 SYMLINK libspdk_blob_bdev.so 00:07:14.320 SYMLINK libspdk_accel_dsa.so 00:07:14.320 SO libspdk_vfu_device.so.3.0 00:07:14.320 SYMLINK libspdk_vfu_device.so 00:07:14.581 LIB libspdk_fsdev_aio.a 00:07:14.581 SO libspdk_fsdev_aio.so.1.0 00:07:14.581 LIB libspdk_sock_posix.a 00:07:14.581 SO libspdk_sock_posix.so.6.0 00:07:14.581 SYMLINK libspdk_fsdev_aio.so 00:07:14.843 SYMLINK libspdk_sock_posix.so 00:07:14.843 CC module/bdev/lvol/vbdev_lvol.o 00:07:14.843 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:14.843 CC module/blobfs/bdev/blobfs_bdev.o 00:07:14.843 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:14.843 CC module/bdev/raid/bdev_raid.o 00:07:14.843 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:14.843 CC module/bdev/raid/bdev_raid_rpc.o 00:07:14.843 CC module/bdev/iscsi/bdev_iscsi.o 00:07:14.843 CC module/bdev/split/vbdev_split.o 00:07:14.843 CC module/bdev/error/vbdev_error.o 00:07:14.843 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:14.843 CC module/bdev/null/bdev_null.o 00:07:14.843 CC module/bdev/split/vbdev_split_rpc.o 00:07:14.843 CC module/bdev/error/vbdev_error_rpc.o 00:07:14.843 CC module/bdev/malloc/bdev_malloc.o 00:07:14.843 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:14.843 CC module/bdev/gpt/gpt.o 00:07:14.843 CC module/bdev/null/bdev_null_rpc.o 00:07:14.843 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:14.843 CC module/bdev/gpt/vbdev_gpt.o 00:07:14.843 CC module/bdev/raid/raid0.o 00:07:14.843 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:14.843 CC module/bdev/raid/raid1.o 00:07:14.843 CC module/bdev/raid/bdev_raid_sb.o 00:07:14.843 CC module/bdev/delay/vbdev_delay.o 00:07:14.843 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:14.843 CC module/bdev/raid/concat.o 00:07:14.843 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:14.843 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:14.843 CC module/bdev/nvme/bdev_nvme.o 00:07:14.843 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:14.843 CC module/bdev/nvme/nvme_rpc.o 00:07:14.843 CC module/bdev/ftl/bdev_ftl.o 00:07:14.843 CC module/bdev/passthru/vbdev_passthru.o 00:07:14.843 CC module/bdev/passthru/vbdev_passthru_rpc.o 
00:07:14.843 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:14.843 CC module/bdev/nvme/bdev_mdns_client.o 00:07:14.843 CC module/bdev/nvme/vbdev_opal.o 00:07:14.843 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:14.843 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:14.843 CC module/bdev/aio/bdev_aio.o 00:07:14.843 CC module/bdev/aio/bdev_aio_rpc.o 00:07:15.103 LIB libspdk_blobfs_bdev.a 00:07:15.103 LIB libspdk_bdev_split.a 00:07:15.103 SO libspdk_blobfs_bdev.so.6.0 00:07:15.103 LIB libspdk_bdev_null.a 00:07:15.103 LIB libspdk_bdev_error.a 00:07:15.103 LIB libspdk_bdev_gpt.a 00:07:15.103 SO libspdk_bdev_split.so.6.0 00:07:15.363 SO libspdk_bdev_null.so.6.0 00:07:15.363 SYMLINK libspdk_blobfs_bdev.so 00:07:15.363 LIB libspdk_bdev_ftl.a 00:07:15.364 SO libspdk_bdev_error.so.6.0 00:07:15.364 SO libspdk_bdev_gpt.so.6.0 00:07:15.364 LIB libspdk_bdev_passthru.a 00:07:15.364 LIB libspdk_bdev_aio.a 00:07:15.364 SO libspdk_bdev_ftl.so.6.0 00:07:15.364 LIB libspdk_bdev_zone_block.a 00:07:15.364 SYMLINK libspdk_bdev_split.so 00:07:15.364 LIB libspdk_bdev_iscsi.a 00:07:15.364 SYMLINK libspdk_bdev_null.so 00:07:15.364 SYMLINK libspdk_bdev_error.so 00:07:15.364 SO libspdk_bdev_passthru.so.6.0 00:07:15.364 SO libspdk_bdev_aio.so.6.0 00:07:15.364 LIB libspdk_bdev_malloc.a 00:07:15.364 SYMLINK libspdk_bdev_gpt.so 00:07:15.364 SO libspdk_bdev_zone_block.so.6.0 00:07:15.364 SO libspdk_bdev_iscsi.so.6.0 00:07:15.364 LIB libspdk_bdev_delay.a 00:07:15.364 SYMLINK libspdk_bdev_ftl.so 00:07:15.364 SO libspdk_bdev_malloc.so.6.0 00:07:15.364 LIB libspdk_bdev_lvol.a 00:07:15.364 SO libspdk_bdev_delay.so.6.0 00:07:15.364 SYMLINK libspdk_bdev_passthru.so 00:07:15.364 SYMLINK libspdk_bdev_aio.so 00:07:15.364 SYMLINK libspdk_bdev_zone_block.so 00:07:15.364 SYMLINK libspdk_bdev_iscsi.so 00:07:15.364 SO libspdk_bdev_lvol.so.6.0 00:07:15.364 SYMLINK libspdk_bdev_malloc.so 00:07:15.364 LIB libspdk_bdev_virtio.a 00:07:15.364 SYMLINK libspdk_bdev_delay.so 00:07:15.624 SO libspdk_bdev_virtio.so.6.0 00:07:15.624 SYMLINK libspdk_bdev_lvol.so 00:07:15.624 SYMLINK libspdk_bdev_virtio.so 00:07:15.885 LIB libspdk_bdev_raid.a 00:07:15.885 SO libspdk_bdev_raid.so.6.0 00:07:16.145 SYMLINK libspdk_bdev_raid.so 00:07:17.531 LIB libspdk_bdev_nvme.a 00:07:17.531 SO libspdk_bdev_nvme.so.7.1 00:07:17.531 SYMLINK libspdk_bdev_nvme.so 00:07:18.103 CC module/event/subsystems/sock/sock.o 00:07:18.103 CC module/event/subsystems/iobuf/iobuf.o 00:07:18.103 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:18.103 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:18.103 CC module/event/subsystems/scheduler/scheduler.o 00:07:18.103 CC module/event/subsystems/keyring/keyring.o 00:07:18.103 CC module/event/subsystems/vmd/vmd.o 00:07:18.103 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:18.103 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:18.103 CC module/event/subsystems/fsdev/fsdev.o 00:07:18.363 LIB libspdk_event_scheduler.a 00:07:18.363 LIB libspdk_event_sock.a 00:07:18.363 LIB libspdk_event_keyring.a 00:07:18.363 LIB libspdk_event_vhost_blk.a 00:07:18.363 LIB libspdk_event_fsdev.a 00:07:18.363 LIB libspdk_event_vfu_tgt.a 00:07:18.363 LIB libspdk_event_vmd.a 00:07:18.363 LIB libspdk_event_iobuf.a 00:07:18.363 SO libspdk_event_scheduler.so.4.0 00:07:18.363 SO libspdk_event_sock.so.5.0 00:07:18.363 SO libspdk_event_keyring.so.1.0 00:07:18.363 SO libspdk_event_vhost_blk.so.3.0 00:07:18.363 SO libspdk_event_fsdev.so.1.0 00:07:18.363 SO libspdk_event_vfu_tgt.so.3.0 00:07:18.363 SO libspdk_event_vmd.so.6.0 00:07:18.363 SO libspdk_event_iobuf.so.3.0 
00:07:18.363 SYMLINK libspdk_event_scheduler.so 00:07:18.363 SYMLINK libspdk_event_sock.so 00:07:18.363 SYMLINK libspdk_event_vfu_tgt.so 00:07:18.363 SYMLINK libspdk_event_keyring.so 00:07:18.363 SYMLINK libspdk_event_vhost_blk.so 00:07:18.363 SYMLINK libspdk_event_fsdev.so 00:07:18.363 SYMLINK libspdk_event_vmd.so 00:07:18.363 SYMLINK libspdk_event_iobuf.so 00:07:18.933 CC module/event/subsystems/accel/accel.o 00:07:18.933 LIB libspdk_event_accel.a 00:07:18.933 SO libspdk_event_accel.so.6.0 00:07:19.194 SYMLINK libspdk_event_accel.so 00:07:19.454 CC module/event/subsystems/bdev/bdev.o 00:07:19.715 LIB libspdk_event_bdev.a 00:07:19.715 SO libspdk_event_bdev.so.6.0 00:07:19.715 SYMLINK libspdk_event_bdev.so 00:07:19.976 CC module/event/subsystems/scsi/scsi.o 00:07:19.976 CC module/event/subsystems/ublk/ublk.o 00:07:19.976 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:19.976 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:19.976 CC module/event/subsystems/nbd/nbd.o 00:07:20.237 LIB libspdk_event_ublk.a 00:07:20.237 LIB libspdk_event_nbd.a 00:07:20.237 LIB libspdk_event_scsi.a 00:07:20.237 SO libspdk_event_ublk.so.3.0 00:07:20.237 SO libspdk_event_nbd.so.6.0 00:07:20.237 SO libspdk_event_scsi.so.6.0 00:07:20.237 LIB libspdk_event_nvmf.a 00:07:20.237 SYMLINK libspdk_event_nbd.so 00:07:20.237 SYMLINK libspdk_event_ublk.so 00:07:20.237 SYMLINK libspdk_event_scsi.so 00:07:20.237 SO libspdk_event_nvmf.so.6.0 00:07:20.497 SYMLINK libspdk_event_nvmf.so 00:07:20.758 CC module/event/subsystems/iscsi/iscsi.o 00:07:20.758 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:20.758 LIB libspdk_event_iscsi.a 00:07:20.758 LIB libspdk_event_vhost_scsi.a 00:07:21.020 SO libspdk_event_iscsi.so.6.0 00:07:21.020 SO libspdk_event_vhost_scsi.so.3.0 00:07:21.020 SYMLINK libspdk_event_iscsi.so 00:07:21.020 SYMLINK libspdk_event_vhost_scsi.so 00:07:21.281 SO libspdk.so.6.0 00:07:21.281 SYMLINK libspdk.so 00:07:21.545 TEST_HEADER include/spdk/accel.h 00:07:21.545 TEST_HEADER include/spdk/accel_module.h 00:07:21.545 TEST_HEADER include/spdk/barrier.h 00:07:21.545 TEST_HEADER include/spdk/assert.h 00:07:21.545 TEST_HEADER include/spdk/bdev.h 00:07:21.545 TEST_HEADER include/spdk/base64.h 00:07:21.545 TEST_HEADER include/spdk/bdev_module.h 00:07:21.545 TEST_HEADER include/spdk/bdev_zone.h 00:07:21.545 TEST_HEADER include/spdk/bit_array.h 00:07:21.545 TEST_HEADER include/spdk/bit_pool.h 00:07:21.545 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:21.545 TEST_HEADER include/spdk/blob_bdev.h 00:07:21.545 CC app/trace_record/trace_record.o 00:07:21.545 TEST_HEADER include/spdk/blobfs.h 00:07:21.545 CXX app/trace/trace.o 00:07:21.545 TEST_HEADER include/spdk/blob.h 00:07:21.545 CC app/spdk_nvme_discover/discovery_aer.o 00:07:21.545 TEST_HEADER include/spdk/conf.h 00:07:21.545 TEST_HEADER include/spdk/cpuset.h 00:07:21.545 TEST_HEADER include/spdk/config.h 00:07:21.545 TEST_HEADER include/spdk/crc16.h 00:07:21.545 TEST_HEADER include/spdk/crc64.h 00:07:21.545 CC app/spdk_lspci/spdk_lspci.o 00:07:21.545 TEST_HEADER include/spdk/crc32.h 00:07:21.545 TEST_HEADER include/spdk/dif.h 00:07:21.545 CC app/spdk_top/spdk_top.o 00:07:21.545 TEST_HEADER include/spdk/dma.h 00:07:21.545 TEST_HEADER include/spdk/endian.h 00:07:21.545 CC test/rpc_client/rpc_client_test.o 00:07:21.545 TEST_HEADER include/spdk/env_dpdk.h 00:07:21.545 TEST_HEADER include/spdk/env.h 00:07:21.545 TEST_HEADER include/spdk/event.h 00:07:21.545 TEST_HEADER include/spdk/fd_group.h 00:07:21.545 TEST_HEADER include/spdk/fd.h 00:07:21.545 CC 
app/spdk_nvme_identify/identify.o 00:07:21.545 TEST_HEADER include/spdk/file.h 00:07:21.545 TEST_HEADER include/spdk/fsdev.h 00:07:21.545 TEST_HEADER include/spdk/fsdev_module.h 00:07:21.545 CC app/spdk_nvme_perf/perf.o 00:07:21.545 TEST_HEADER include/spdk/ftl.h 00:07:21.545 TEST_HEADER include/spdk/gpt_spec.h 00:07:21.545 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:21.545 TEST_HEADER include/spdk/hexlify.h 00:07:21.545 TEST_HEADER include/spdk/idxd.h 00:07:21.545 TEST_HEADER include/spdk/histogram_data.h 00:07:21.545 TEST_HEADER include/spdk/idxd_spec.h 00:07:21.545 TEST_HEADER include/spdk/init.h 00:07:21.545 TEST_HEADER include/spdk/ioat.h 00:07:21.545 TEST_HEADER include/spdk/ioat_spec.h 00:07:21.545 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:21.545 TEST_HEADER include/spdk/iscsi_spec.h 00:07:21.545 TEST_HEADER include/spdk/json.h 00:07:21.545 TEST_HEADER include/spdk/jsonrpc.h 00:07:21.545 TEST_HEADER include/spdk/keyring.h 00:07:21.545 TEST_HEADER include/spdk/likely.h 00:07:21.545 TEST_HEADER include/spdk/keyring_module.h 00:07:21.545 TEST_HEADER include/spdk/log.h 00:07:21.545 TEST_HEADER include/spdk/lvol.h 00:07:21.545 TEST_HEADER include/spdk/md5.h 00:07:21.545 TEST_HEADER include/spdk/mmio.h 00:07:21.545 TEST_HEADER include/spdk/memory.h 00:07:21.545 TEST_HEADER include/spdk/nbd.h 00:07:21.545 TEST_HEADER include/spdk/net.h 00:07:21.545 TEST_HEADER include/spdk/notify.h 00:07:21.545 CC app/iscsi_tgt/iscsi_tgt.o 00:07:21.545 TEST_HEADER include/spdk/nvme.h 00:07:21.545 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:21.545 TEST_HEADER include/spdk/nvme_intel.h 00:07:21.545 CC app/spdk_dd/spdk_dd.o 00:07:21.545 TEST_HEADER include/spdk/nvme_spec.h 00:07:21.545 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:21.545 TEST_HEADER include/spdk/nvme_zns.h 00:07:21.545 CC app/nvmf_tgt/nvmf_main.o 00:07:21.545 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:21.810 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:21.810 TEST_HEADER include/spdk/nvmf_spec.h 00:07:21.810 TEST_HEADER include/spdk/nvmf.h 00:07:21.810 TEST_HEADER include/spdk/opal.h 00:07:21.810 TEST_HEADER include/spdk/nvmf_transport.h 00:07:21.810 TEST_HEADER include/spdk/opal_spec.h 00:07:21.810 TEST_HEADER include/spdk/pci_ids.h 00:07:21.810 TEST_HEADER include/spdk/pipe.h 00:07:21.810 TEST_HEADER include/spdk/queue.h 00:07:21.810 TEST_HEADER include/spdk/rpc.h 00:07:21.810 TEST_HEADER include/spdk/reduce.h 00:07:21.810 CC app/spdk_tgt/spdk_tgt.o 00:07:21.810 TEST_HEADER include/spdk/scheduler.h 00:07:21.810 TEST_HEADER include/spdk/scsi.h 00:07:21.810 TEST_HEADER include/spdk/scsi_spec.h 00:07:21.810 TEST_HEADER include/spdk/stdinc.h 00:07:21.810 TEST_HEADER include/spdk/sock.h 00:07:21.810 TEST_HEADER include/spdk/thread.h 00:07:21.810 TEST_HEADER include/spdk/string.h 00:07:21.810 TEST_HEADER include/spdk/trace_parser.h 00:07:21.810 TEST_HEADER include/spdk/trace.h 00:07:21.810 TEST_HEADER include/spdk/tree.h 00:07:21.810 TEST_HEADER include/spdk/ublk.h 00:07:21.810 TEST_HEADER include/spdk/util.h 00:07:21.810 TEST_HEADER include/spdk/uuid.h 00:07:21.810 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:21.810 TEST_HEADER include/spdk/version.h 00:07:21.810 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:21.810 TEST_HEADER include/spdk/vhost.h 00:07:21.810 TEST_HEADER include/spdk/vmd.h 00:07:21.810 TEST_HEADER include/spdk/xor.h 00:07:21.810 CXX test/cpp_headers/accel.o 00:07:21.810 TEST_HEADER include/spdk/zipf.h 00:07:21.810 CXX test/cpp_headers/accel_module.o 00:07:21.810 CXX test/cpp_headers/assert.o 00:07:21.810 CXX 
test/cpp_headers/barrier.o 00:07:21.810 CXX test/cpp_headers/base64.o 00:07:21.810 CXX test/cpp_headers/bdev.o 00:07:21.810 CXX test/cpp_headers/bdev_module.o 00:07:21.810 CXX test/cpp_headers/bit_array.o 00:07:21.810 CXX test/cpp_headers/bdev_zone.o 00:07:21.810 CXX test/cpp_headers/bit_pool.o 00:07:21.810 CXX test/cpp_headers/blob_bdev.o 00:07:21.810 CXX test/cpp_headers/blobfs.o 00:07:21.810 CXX test/cpp_headers/blobfs_bdev.o 00:07:21.810 CXX test/cpp_headers/blob.o 00:07:21.810 CXX test/cpp_headers/conf.o 00:07:21.810 CXX test/cpp_headers/cpuset.o 00:07:21.810 CXX test/cpp_headers/crc16.o 00:07:21.810 CXX test/cpp_headers/config.o 00:07:21.810 CXX test/cpp_headers/crc32.o 00:07:21.810 CXX test/cpp_headers/crc64.o 00:07:21.810 CXX test/cpp_headers/dif.o 00:07:21.810 CXX test/cpp_headers/dma.o 00:07:21.810 CXX test/cpp_headers/endian.o 00:07:21.810 CXX test/cpp_headers/env_dpdk.o 00:07:21.810 CXX test/cpp_headers/env.o 00:07:21.810 CXX test/cpp_headers/event.o 00:07:21.810 CXX test/cpp_headers/fd_group.o 00:07:21.810 CXX test/cpp_headers/fd.o 00:07:21.810 CXX test/cpp_headers/file.o 00:07:21.810 CXX test/cpp_headers/fsdev.o 00:07:21.810 CXX test/cpp_headers/fsdev_module.o 00:07:21.810 CXX test/cpp_headers/ftl.o 00:07:21.810 CXX test/cpp_headers/fuse_dispatcher.o 00:07:21.810 CXX test/cpp_headers/gpt_spec.o 00:07:21.810 CXX test/cpp_headers/hexlify.o 00:07:21.810 CXX test/cpp_headers/idxd.o 00:07:21.810 CXX test/cpp_headers/histogram_data.o 00:07:21.810 CXX test/cpp_headers/idxd_spec.o 00:07:21.810 CXX test/cpp_headers/init.o 00:07:21.810 CXX test/cpp_headers/ioat.o 00:07:21.810 CXX test/cpp_headers/ioat_spec.o 00:07:21.810 CXX test/cpp_headers/iscsi_spec.o 00:07:21.810 CXX test/cpp_headers/json.o 00:07:21.810 CXX test/cpp_headers/jsonrpc.o 00:07:21.810 CXX test/cpp_headers/keyring.o 00:07:21.810 CXX test/cpp_headers/keyring_module.o 00:07:21.810 CXX test/cpp_headers/likely.o 00:07:21.810 CXX test/cpp_headers/log.o 00:07:21.810 CXX test/cpp_headers/memory.o 00:07:21.810 CXX test/cpp_headers/lvol.o 00:07:21.810 CXX test/cpp_headers/md5.o 00:07:21.810 CXX test/cpp_headers/mmio.o 00:07:21.810 CXX test/cpp_headers/nbd.o 00:07:21.810 CXX test/cpp_headers/net.o 00:07:21.810 CXX test/cpp_headers/nvme_ocssd.o 00:07:21.810 CXX test/cpp_headers/notify.o 00:07:21.810 CXX test/cpp_headers/nvme_spec.o 00:07:21.810 CXX test/cpp_headers/nvme_intel.o 00:07:21.810 CXX test/cpp_headers/nvme.o 00:07:21.810 CXX test/cpp_headers/nvmf_cmd.o 00:07:21.810 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:21.810 CXX test/cpp_headers/nvme_zns.o 00:07:21.810 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:21.810 CXX test/cpp_headers/nvmf.o 00:07:21.810 CXX test/cpp_headers/nvmf_spec.o 00:07:21.810 CXX test/cpp_headers/opal.o 00:07:21.810 CXX test/cpp_headers/nvmf_transport.o 00:07:21.810 CXX test/cpp_headers/opal_spec.o 00:07:21.810 CXX test/cpp_headers/pci_ids.o 00:07:21.810 CXX test/cpp_headers/queue.o 00:07:21.810 CXX test/cpp_headers/pipe.o 00:07:21.810 CC test/thread/poller_perf/poller_perf.o 00:07:21.810 CXX test/cpp_headers/reduce.o 00:07:21.810 CXX test/cpp_headers/rpc.o 00:07:21.810 CXX test/cpp_headers/scheduler.o 00:07:21.810 CXX test/cpp_headers/scsi_spec.o 00:07:21.810 CXX test/cpp_headers/sock.o 00:07:21.810 CXX test/cpp_headers/scsi.o 00:07:21.810 CXX test/cpp_headers/stdinc.o 00:07:21.810 CXX test/cpp_headers/trace_parser.o 00:07:21.810 CXX test/cpp_headers/string.o 00:07:21.810 CXX test/cpp_headers/thread.o 00:07:21.810 CXX test/cpp_headers/trace.o 00:07:21.810 CXX test/cpp_headers/tree.o 00:07:21.810 CC 
examples/ioat/perf/perf.o 00:07:21.810 CXX test/cpp_headers/ublk.o 00:07:21.810 CXX test/cpp_headers/util.o 00:07:21.810 CXX test/cpp_headers/uuid.o 00:07:21.810 CXX test/cpp_headers/vfio_user_pci.o 00:07:21.810 CXX test/cpp_headers/version.o 00:07:21.810 CC examples/util/zipf/zipf.o 00:07:21.810 CXX test/cpp_headers/vfio_user_spec.o 00:07:21.810 CXX test/cpp_headers/vmd.o 00:07:21.810 CXX test/cpp_headers/vhost.o 00:07:21.810 LINK spdk_lspci 00:07:21.810 CXX test/cpp_headers/zipf.o 00:07:21.810 CXX test/cpp_headers/xor.o 00:07:21.810 CC test/env/pci/pci_ut.o 00:07:22.077 CC app/fio/nvme/fio_plugin.o 00:07:22.077 CC test/app/histogram_perf/histogram_perf.o 00:07:22.077 CC test/env/vtophys/vtophys.o 00:07:22.077 CC test/app/jsoncat/jsoncat.o 00:07:22.077 CC test/app/stub/stub.o 00:07:22.077 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:22.077 CC test/env/memory/memory_ut.o 00:07:22.077 CC test/dma/test_dma/test_dma.o 00:07:22.077 CC test/app/bdev_svc/bdev_svc.o 00:07:22.077 CC examples/ioat/verify/verify.o 00:07:22.077 CC app/fio/bdev/fio_plugin.o 00:07:22.077 LINK spdk_nvme_discover 00:07:22.342 LINK interrupt_tgt 00:07:22.342 LINK rpc_client_test 00:07:22.342 LINK nvmf_tgt 00:07:22.603 LINK iscsi_tgt 00:07:22.603 LINK spdk_trace_record 00:07:22.603 LINK spdk_tgt 00:07:22.603 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:22.603 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:22.603 CC test/env/mem_callbacks/mem_callbacks.o 00:07:22.603 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:22.603 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:22.603 LINK env_dpdk_post_init 00:07:22.603 LINK jsoncat 00:07:22.863 LINK zipf 00:07:22.863 LINK spdk_dd 00:07:22.863 LINK ioat_perf 00:07:22.863 LINK vtophys 00:07:22.863 LINK poller_perf 00:07:22.863 LINK histogram_perf 00:07:23.124 LINK bdev_svc 00:07:23.124 LINK stub 00:07:23.124 LINK spdk_trace 00:07:23.124 LINK verify 00:07:23.124 LINK spdk_top 00:07:23.384 LINK vhost_fuzz 00:07:23.384 LINK nvme_fuzz 00:07:23.384 LINK pci_ut 00:07:23.384 LINK test_dma 00:07:23.384 LINK spdk_bdev 00:07:23.384 CC examples/vmd/lsvmd/lsvmd.o 00:07:23.384 LINK spdk_nvme 00:07:23.384 CC examples/vmd/led/led.o 00:07:23.384 CC examples/idxd/perf/perf.o 00:07:23.384 CC examples/sock/hello_world/hello_sock.o 00:07:23.384 CC examples/thread/thread/thread_ex.o 00:07:23.384 CC test/event/event_perf/event_perf.o 00:07:23.384 CC test/event/reactor_perf/reactor_perf.o 00:07:23.384 LINK mem_callbacks 00:07:23.384 CC test/event/reactor/reactor.o 00:07:23.384 LINK spdk_nvme_perf 00:07:23.646 CC test/event/app_repeat/app_repeat.o 00:07:23.646 CC app/vhost/vhost.o 00:07:23.646 CC test/event/scheduler/scheduler.o 00:07:23.646 LINK lsvmd 00:07:23.646 LINK led 00:07:23.646 LINK spdk_nvme_identify 00:07:23.646 LINK event_perf 00:07:23.646 LINK reactor_perf 00:07:23.646 LINK reactor 00:07:23.646 LINK hello_sock 00:07:23.646 LINK idxd_perf 00:07:23.646 LINK app_repeat 00:07:23.646 LINK thread 00:07:23.908 LINK vhost 00:07:23.908 LINK scheduler 00:07:23.908 CC test/nvme/e2edp/nvme_dp.o 00:07:23.908 CC test/nvme/sgl/sgl.o 00:07:23.908 CC test/nvme/fused_ordering/fused_ordering.o 00:07:23.908 CC test/nvme/cuse/cuse.o 00:07:23.908 CC test/nvme/overhead/overhead.o 00:07:23.908 CC test/nvme/connect_stress/connect_stress.o 00:07:23.908 CC test/nvme/aer/aer.o 00:07:23.909 CC test/nvme/err_injection/err_injection.o 00:07:23.909 CC test/nvme/simple_copy/simple_copy.o 00:07:23.909 CC test/nvme/reserve/reserve.o 00:07:23.909 LINK memory_ut 00:07:23.909 CC test/nvme/reset/reset.o 00:07:23.909 
CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:23.909 CC test/nvme/boot_partition/boot_partition.o 00:07:23.909 CC test/nvme/compliance/nvme_compliance.o 00:07:23.909 CC test/nvme/startup/startup.o 00:07:23.909 CC test/blobfs/mkfs/mkfs.o 00:07:23.909 CC test/nvme/fdp/fdp.o 00:07:23.909 CC test/accel/dif/dif.o 00:07:24.168 CC test/lvol/esnap/esnap.o 00:07:24.168 CC examples/nvme/arbitration/arbitration.o 00:07:24.168 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:24.168 LINK boot_partition 00:07:24.168 CC examples/nvme/reconnect/reconnect.o 00:07:24.168 CC examples/nvme/hello_world/hello_world.o 00:07:24.168 LINK connect_stress 00:07:24.168 LINK startup 00:07:24.168 LINK err_injection 00:07:24.168 CC examples/nvme/hotplug/hotplug.o 00:07:24.168 CC examples/nvme/abort/abort.o 00:07:24.168 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:24.168 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:24.169 LINK fused_ordering 00:07:24.169 LINK reserve 00:07:24.169 LINK doorbell_aers 00:07:24.169 LINK sgl 00:07:24.169 LINK mkfs 00:07:24.169 LINK simple_copy 00:07:24.429 LINK iscsi_fuzz 00:07:24.429 LINK reset 00:07:24.429 LINK aer 00:07:24.429 LINK nvme_dp 00:07:24.429 LINK overhead 00:07:24.429 LINK nvme_compliance 00:07:24.429 LINK fdp 00:07:24.429 CC examples/accel/perf/accel_perf.o 00:07:24.429 CC examples/blob/hello_world/hello_blob.o 00:07:24.429 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:24.429 CC examples/blob/cli/blobcli.o 00:07:24.429 LINK pmr_persistence 00:07:24.429 LINK cmb_copy 00:07:24.429 LINK hello_world 00:07:24.429 LINK hotplug 00:07:24.689 LINK arbitration 00:07:24.689 LINK reconnect 00:07:24.689 LINK abort 00:07:24.689 LINK dif 00:07:24.689 LINK hello_blob 00:07:24.689 LINK nvme_manage 00:07:24.689 LINK hello_fsdev 00:07:24.950 LINK accel_perf 00:07:24.950 LINK blobcli 00:07:25.210 LINK cuse 00:07:25.210 CC test/bdev/bdevio/bdevio.o 00:07:25.471 CC examples/bdev/hello_world/hello_bdev.o 00:07:25.471 CC examples/bdev/bdevperf/bdevperf.o 00:07:25.731 LINK bdevio 00:07:25.731 LINK hello_bdev 00:07:26.302 LINK bdevperf 00:07:26.872 CC examples/nvmf/nvmf/nvmf.o 00:07:27.132 LINK nvmf 00:07:28.532 LINK esnap 00:07:28.793 00:07:28.793 real 0m56.592s 00:07:28.793 user 8m6.612s 00:07:28.793 sys 5m35.343s 00:07:28.793 14:01:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:28.793 14:01:17 make -- common/autotest_common.sh@10 -- $ set +x 00:07:28.793 ************************************ 00:07:28.793 END TEST make 00:07:28.793 ************************************ 00:07:29.053 14:01:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:29.053 14:01:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:29.053 14:01:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:29.053 14:01:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.053 14:01:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:29.053 14:01:17 -- pm/common@44 -- $ pid=2500715 00:07:29.053 14:01:17 -- pm/common@50 -- $ kill -TERM 2500715 00:07:29.053 14:01:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.053 14:01:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:29.053 14:01:17 -- pm/common@44 -- $ pid=2500716 00:07:29.053 14:01:17 -- pm/common@50 -- $ kill -TERM 2500716 00:07:29.053 14:01:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.053 14:01:17 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:29.053 14:01:17 -- pm/common@44 -- $ pid=2500718 00:07:29.053 14:01:17 -- pm/common@50 -- $ kill -TERM 2500718 00:07:29.053 14:01:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.053 14:01:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:29.053 14:01:17 -- pm/common@44 -- $ pid=2500741 00:07:29.053 14:01:17 -- pm/common@50 -- $ sudo -E kill -TERM 2500741 00:07:29.053 14:01:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:29.053 14:01:17 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:29.053 14:01:17 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.053 14:01:17 -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.053 14:01:17 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.053 14:01:17 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.053 14:01:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.316 14:01:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.316 14:01:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.316 14:01:17 -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.316 14:01:17 -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.316 14:01:17 -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.316 14:01:17 -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.316 14:01:17 -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.316 14:01:17 -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.316 14:01:17 -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.316 14:01:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.316 14:01:17 -- scripts/common.sh@344 -- # case "$op" in 00:07:29.316 14:01:17 -- scripts/common.sh@345 -- # : 1 00:07:29.316 14:01:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.316 14:01:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.316 14:01:17 -- scripts/common.sh@365 -- # decimal 1 00:07:29.316 14:01:17 -- scripts/common.sh@353 -- # local d=1 00:07:29.316 14:01:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.316 14:01:17 -- scripts/common.sh@355 -- # echo 1 00:07:29.316 14:01:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.316 14:01:17 -- scripts/common.sh@366 -- # decimal 2 00:07:29.316 14:01:17 -- scripts/common.sh@353 -- # local d=2 00:07:29.316 14:01:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.316 14:01:17 -- scripts/common.sh@355 -- # echo 2 00:07:29.316 14:01:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.316 14:01:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.316 14:01:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.316 14:01:17 -- scripts/common.sh@368 -- # return 0 00:07:29.316 14:01:17 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.316 14:01:17 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.316 --rc genhtml_branch_coverage=1 00:07:29.316 --rc genhtml_function_coverage=1 00:07:29.316 --rc genhtml_legend=1 00:07:29.316 --rc geninfo_all_blocks=1 00:07:29.316 --rc geninfo_unexecuted_blocks=1 00:07:29.316 00:07:29.316 ' 00:07:29.316 14:01:17 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.316 --rc genhtml_branch_coverage=1 00:07:29.316 --rc genhtml_function_coverage=1 00:07:29.316 --rc genhtml_legend=1 00:07:29.316 --rc geninfo_all_blocks=1 00:07:29.316 --rc geninfo_unexecuted_blocks=1 00:07:29.316 00:07:29.316 ' 00:07:29.316 14:01:17 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.316 --rc genhtml_branch_coverage=1 00:07:29.316 --rc genhtml_function_coverage=1 00:07:29.316 --rc genhtml_legend=1 00:07:29.316 --rc geninfo_all_blocks=1 00:07:29.316 --rc geninfo_unexecuted_blocks=1 00:07:29.316 00:07:29.316 ' 00:07:29.316 14:01:17 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.316 --rc genhtml_branch_coverage=1 00:07:29.316 --rc genhtml_function_coverage=1 00:07:29.316 --rc genhtml_legend=1 00:07:29.316 --rc geninfo_all_blocks=1 00:07:29.316 --rc geninfo_unexecuted_blocks=1 00:07:29.316 00:07:29.316 ' 00:07:29.316 14:01:17 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.316 14:01:17 -- nvmf/common.sh@7 -- # uname -s 00:07:29.316 14:01:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.316 14:01:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.316 14:01:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.316 14:01:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.316 14:01:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.316 14:01:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.316 14:01:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.316 14:01:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.316 14:01:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.316 14:01:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.316 14:01:17 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.316 14:01:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:29.316 14:01:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.316 14:01:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.316 14:01:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.316 14:01:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.316 14:01:17 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.316 14:01:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.316 14:01:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.316 14:01:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.316 14:01:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.316 14:01:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.316 14:01:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.316 14:01:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.316 14:01:17 -- paths/export.sh@5 -- # export PATH 00:07:29.316 14:01:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.316 14:01:17 -- nvmf/common.sh@51 -- # : 0 00:07:29.316 14:01:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.316 14:01:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.316 14:01:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.316 14:01:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.316 14:01:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.316 14:01:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.316 14:01:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.316 14:01:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.316 14:01:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.316 14:01:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:29.316 14:01:17 -- spdk/autotest.sh@32 -- # uname -s 00:07:29.316 14:01:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:29.316 14:01:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:29.316 14:01:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
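For reference, a minimal sketch of one way to reproduce the NVME_HOSTNQN/NVME_HOSTID pairing visible in the trace above (the host ID is the UUID suffix of the generated NQN); test/nvmf/common.sh may derive these values differently internally, and nvme-cli must be installed.

#!/usr/bin/env bash
# Sketch only: mirrors the values seen in the trace, not the script's exact logic.
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep the trailing UUID as the host ID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"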
00:07:29.316 14:01:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:29.316 14:01:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:29.316 14:01:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:29.316 14:01:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:29.316 14:01:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:29.316 14:01:17 -- spdk/autotest.sh@48 -- # udevadm_pid=2566843 00:07:29.316 14:01:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:29.316 14:01:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:29.316 14:01:17 -- pm/common@17 -- # local monitor 00:07:29.316 14:01:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.316 14:01:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.316 14:01:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.316 14:01:17 -- pm/common@21 -- # date +%s 00:07:29.316 14:01:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:29.316 14:01:17 -- pm/common@25 -- # sleep 1 00:07:29.316 14:01:17 -- pm/common@21 -- # date +%s 00:07:29.316 14:01:17 -- pm/common@21 -- # date +%s 00:07:29.316 14:01:17 -- pm/common@21 -- # date +%s 00:07:29.316 14:01:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733490077 00:07:29.316 14:01:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733490077 00:07:29.316 14:01:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733490077 00:07:29.316 14:01:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733490077 00:07:29.316 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733490077_collect-cpu-load.pm.log 00:07:29.316 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733490077_collect-vmstat.pm.log 00:07:29.316 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733490077_collect-cpu-temp.pm.log 00:07:29.316 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733490077_collect-bmc-pm.bmc.pm.log 00:07:30.259 14:01:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:30.259 14:01:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:30.259 14:01:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.259 14:01:18 -- common/autotest_common.sh@10 -- # set +x 00:07:30.259 14:01:18 -- spdk/autotest.sh@59 -- # create_test_list 00:07:30.259 14:01:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:30.259 14:01:18 -- common/autotest_common.sh@10 -- # set +x 00:07:30.259 14:01:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:30.259 14:01:18 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:30.259 14:01:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:30.259 14:01:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:30.259 14:01:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:30.259 14:01:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:30.259 14:01:18 -- common/autotest_common.sh@1457 -- # uname 00:07:30.259 14:01:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:30.259 14:01:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:30.259 14:01:18 -- common/autotest_common.sh@1477 -- # uname 00:07:30.259 14:01:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:30.259 14:01:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:30.259 14:01:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:30.534 lcov: LCOV version 1.15 00:07:30.534 14:01:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:52.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:52.591 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:08:02.585 14:01:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:02.585 14:01:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.585 14:01:49 -- common/autotest_common.sh@10 -- # set +x 00:08:02.585 14:01:49 -- spdk/autotest.sh@78 -- # rm -f 00:08:02.585 14:01:49 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:04.522 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:08:04.522 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:08:04.522 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:08:04.522 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:08:04.522 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:08:04.522 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:08:04.522 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:08:04.522 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:08:04.522 0000:65:00.0 (144d a80a): Already using the nvme driver 00:08:04.522 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:08:04.522 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:08:04.782 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:08:04.782 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:08:04.782 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:08:04.782 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:08:04.782 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:08:04.782 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:08:04.782 14:01:53 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:08:04.782 14:01:53 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:04.782 14:01:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:04.782 14:01:53 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:04.782 14:01:53 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:04.782 14:01:53 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:04.782 14:01:53 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:04.782 14:01:53 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:08:04.782 14:01:53 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:04.782 14:01:53 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:04.782 14:01:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:04.782 14:01:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:04.782 14:01:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:04.782 14:01:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:04.782 14:01:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:04.782 14:01:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:04.782 14:01:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:04.782 14:01:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:04.782 14:01:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:04.782 No valid GPT data, bailing 00:08:04.782 14:01:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:04.782 14:01:53 -- scripts/common.sh@394 -- # pt= 00:08:04.782 14:01:53 -- scripts/common.sh@395 -- # return 1 00:08:04.782 14:01:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:04.782 1+0 records in 00:08:04.782 1+0 records out 00:08:04.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499061 s, 210 MB/s 00:08:04.782 14:01:53 -- spdk/autotest.sh@105 -- # sync 00:08:04.782 14:01:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:04.782 14:01:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:04.782 14:01:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:14.773 14:02:01 -- spdk/autotest.sh@111 -- # uname -s 00:08:14.773 14:02:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:14.773 14:02:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:14.773 14:02:01 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:17.315 Hugepages 00:08:17.315 node hugesize free / total 00:08:17.315 node0 1048576kB 0 / 0 00:08:17.315 node0 2048kB 0 / 0 00:08:17.315 node1 1048576kB 0 / 0 00:08:17.315 node1 2048kB 0 / 0 00:08:17.315 00:08:17.315 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:17.315 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:08:17.315 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:08:17.315 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:08:17.315 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:08:17.315 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:08:17.315 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:08:17.315 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:08:17.315 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:08:17.315 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:08:17.315 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:08:17.315 I/OAT 0000:80:01.1 8086 0b00 1 
ioatdma - - 00:08:17.315 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:08:17.315 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:08:17.315 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:08:17.315 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:08:17.315 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:08:17.315 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:08:17.315 14:02:05 -- spdk/autotest.sh@117 -- # uname -s 00:08:17.315 14:02:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:17.315 14:02:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:17.315 14:02:05 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:20.609 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:20.609 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:20.609 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:20.610 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:22.575 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:22.575 14:02:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:23.518 14:02:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:23.518 14:02:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:23.518 14:02:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:23.518 14:02:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:23.518 14:02:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:23.518 14:02:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:23.518 14:02:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:23.518 14:02:12 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:23.518 14:02:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:23.518 14:02:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:23.518 14:02:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:08:23.518 14:02:12 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:27.731 Waiting for block devices as requested 00:08:27.731 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:27.731 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:27.731 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:27.731 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:27.731 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:27.731 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:27.731 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:27.731 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:08:27.731 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:08:27.992 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:27.992 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 
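The "ioatdma -> vfio-pci" and "vfio-pci -> ioatdma" lines around here come from scripts/setup.sh rebinding PCI functions between kernel drivers and vfio-pci. A minimal, generic sketch of that sysfs rebinding follows; the real setup.sh does considerably more (hugepage setup, permissions, fallbacks), and the script name and argument here are hypothetical.

#!/usr/bin/env bash
# rebind-vfio.sh (hypothetical): bind one PCI function to vfio-pci via sysfs.
set -e
bdf=${1:?usage: rebind-vfio.sh 0000:80:01.0}
modprobe vfio-pci
# Detach from whatever driver currently owns the device, if any.
if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
fi
# Ask the PCI core to bind this one device to vfio-pci on the next probe.
echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe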
00:08:27.992 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:27.992 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:28.253 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:28.253 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:28.253 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:28.514 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:08:28.514 14:02:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:28.514 14:02:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:08:28.514 14:02:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:08:28.514 14:02:16 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:08:28.514 14:02:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:28.514 14:02:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:08:28.514 14:02:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:28.514 14:02:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:28.514 14:02:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:28.514 14:02:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:28.514 14:02:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:28.514 14:02:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:28.514 14:02:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:28.514 14:02:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:08:28.514 14:02:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:28.514 14:02:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:28.514 14:02:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:28.514 14:02:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:28.514 14:02:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:28.514 14:02:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:28.514 14:02:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:28.514 14:02:16 -- common/autotest_common.sh@1543 -- # continue 00:08:28.514 14:02:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:28.514 14:02:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.514 14:02:16 -- common/autotest_common.sh@10 -- # set +x 00:08:28.514 14:02:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:28.514 14:02:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.514 14:02:17 -- common/autotest_common.sh@10 -- # set +x 00:08:28.514 14:02:17 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:31.811 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:31.811 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:31.811 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:31.811 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:31.811 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:00:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:32.071 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:32.071 14:02:20 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:32.071 14:02:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:32.071 14:02:20 -- common/autotest_common.sh@10 -- # set +x 00:08:32.331 14:02:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:32.331 14:02:20 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:32.331 14:02:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:32.331 14:02:20 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:32.331 14:02:20 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:32.331 14:02:20 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:32.331 14:02:20 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:32.331 14:02:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:32.331 14:02:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:32.331 14:02:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:32.331 14:02:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:32.331 14:02:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:32.331 14:02:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:32.331 14:02:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:32.331 14:02:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:08:32.331 14:02:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:32.331 14:02:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:08:32.331 14:02:20 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:08:32.331 14:02:20 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:08:32.331 14:02:20 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:32.331 14:02:20 -- common/autotest_common.sh@1572 -- # return 0 00:08:32.331 14:02:20 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:32.331 14:02:20 -- common/autotest_common.sh@1580 -- # return 0 00:08:32.331 14:02:20 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:32.331 14:02:20 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:32.331 14:02:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:32.332 14:02:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:32.332 14:02:20 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:32.332 14:02:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.332 14:02:20 -- common/autotest_common.sh@10 -- # set +x 00:08:32.332 14:02:20 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:32.332 14:02:20 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:32.332 14:02:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.332 14:02:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.332 14:02:20 -- common/autotest_common.sh@10 -- # set +x 00:08:32.332 ************************************ 00:08:32.332 START TEST env 00:08:32.332 ************************************ 00:08:32.332 14:02:20 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
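The opal_revert_cleanup path traced above enumerates NVMe controllers with scripts/gen_nvme.sh and keeps only those whose PCI device ID is 0x0a54; the Samsung device here reports 0xa80a, so nothing matches. A short sketch of that enumeration and filter, assuming the same $rootdir layout as the log and jq on the PATH:

#!/usr/bin/env bash
# List NVMe controller BDFs and report which ones carry PCI device ID 0x0a54.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && echo "$bdf matches the 0x0a54 filter"
done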
00:08:32.592 * Looking for test storage... 00:08:32.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:32.592 14:02:20 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:32.592 14:02:20 env -- common/autotest_common.sh@1711 -- # lcov --version 00:08:32.592 14:02:20 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:32.592 14:02:21 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:32.592 14:02:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.592 14:02:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.592 14:02:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.592 14:02:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.592 14:02:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.592 14:02:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.592 14:02:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.592 14:02:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.592 14:02:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.592 14:02:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.592 14:02:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.592 14:02:21 env -- scripts/common.sh@344 -- # case "$op" in 00:08:32.592 14:02:21 env -- scripts/common.sh@345 -- # : 1 00:08:32.592 14:02:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.592 14:02:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:32.592 14:02:21 env -- scripts/common.sh@365 -- # decimal 1 00:08:32.592 14:02:21 env -- scripts/common.sh@353 -- # local d=1 00:08:32.592 14:02:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.592 14:02:21 env -- scripts/common.sh@355 -- # echo 1 00:08:32.592 14:02:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.592 14:02:21 env -- scripts/common.sh@366 -- # decimal 2 00:08:32.592 14:02:21 env -- scripts/common.sh@353 -- # local d=2 00:08:32.592 14:02:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.592 14:02:21 env -- scripts/common.sh@355 -- # echo 2 00:08:32.592 14:02:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.592 14:02:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.592 14:02:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.592 14:02:21 env -- scripts/common.sh@368 -- # return 0 00:08:32.592 14:02:21 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.592 14:02:21 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.592 --rc genhtml_branch_coverage=1 00:08:32.592 --rc genhtml_function_coverage=1 00:08:32.592 --rc genhtml_legend=1 00:08:32.592 --rc geninfo_all_blocks=1 00:08:32.592 --rc geninfo_unexecuted_blocks=1 00:08:32.592 00:08:32.592 ' 00:08:32.592 14:02:21 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.592 --rc genhtml_branch_coverage=1 00:08:32.592 --rc genhtml_function_coverage=1 00:08:32.593 --rc genhtml_legend=1 00:08:32.593 --rc geninfo_all_blocks=1 00:08:32.593 --rc geninfo_unexecuted_blocks=1 00:08:32.593 00:08:32.593 ' 00:08:32.593 14:02:21 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:32.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.593 --rc genhtml_branch_coverage=1 00:08:32.593 
--rc genhtml_function_coverage=1 00:08:32.593 --rc genhtml_legend=1 00:08:32.593 --rc geninfo_all_blocks=1 00:08:32.593 --rc geninfo_unexecuted_blocks=1 00:08:32.593 00:08:32.593 ' 00:08:32.593 14:02:21 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:32.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.593 --rc genhtml_branch_coverage=1 00:08:32.593 --rc genhtml_function_coverage=1 00:08:32.593 --rc genhtml_legend=1 00:08:32.593 --rc geninfo_all_blocks=1 00:08:32.593 --rc geninfo_unexecuted_blocks=1 00:08:32.593 00:08:32.593 ' 00:08:32.593 14:02:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:32.593 14:02:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.593 14:02:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.593 14:02:21 env -- common/autotest_common.sh@10 -- # set +x 00:08:32.593 ************************************ 00:08:32.593 START TEST env_memory 00:08:32.593 ************************************ 00:08:32.593 14:02:21 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:32.593 00:08:32.593 00:08:32.593 CUnit - A unit testing framework for C - Version 2.1-3 00:08:32.593 http://cunit.sourceforge.net/ 00:08:32.593 00:08:32.593 00:08:32.593 Suite: memory 00:08:32.593 Test: alloc and free memory map ...[2024-12-06 14:02:21.185990] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:32.593 passed 00:08:32.593 Test: mem map translation ...[2024-12-06 14:02:21.211592] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:32.593 [2024-12-06 14:02:21.211622] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:32.593 [2024-12-06 14:02:21.211668] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:32.593 [2024-12-06 14:02:21.211681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:32.854 passed 00:08:32.854 Test: mem map registration ...[2024-12-06 14:02:21.267016] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:32.854 [2024-12-06 14:02:21.267050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:32.854 passed 00:08:32.854 Test: mem map adjacent registrations ...passed 00:08:32.854 00:08:32.854 Run Summary: Type Total Ran Passed Failed Inactive 00:08:32.854 suites 1 1 n/a 0 0 00:08:32.854 tests 4 4 4 0 0 00:08:32.854 asserts 152 152 152 0 n/a 00:08:32.854 00:08:32.854 Elapsed time = 0.191 seconds 00:08:32.854 00:08:32.854 real 0m0.206s 00:08:32.854 user 0m0.195s 00:08:32.854 sys 0m0.010s 00:08:32.854 14:02:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.854 14:02:21 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.854 ************************************ 00:08:32.854 END TEST env_memory 00:08:32.854 ************************************ 00:08:32.854 14:02:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:32.854 14:02:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.854 14:02:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.854 14:02:21 env -- common/autotest_common.sh@10 -- # set +x 00:08:32.854 ************************************ 00:08:32.854 START TEST env_vtophys 00:08:32.854 ************************************ 00:08:32.854 14:02:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:32.854 EAL: lib.eal log level changed from notice to debug 00:08:32.854 EAL: Detected lcore 0 as core 0 on socket 0 00:08:32.854 EAL: Detected lcore 1 as core 1 on socket 0 00:08:32.854 EAL: Detected lcore 2 as core 2 on socket 0 00:08:32.854 EAL: Detected lcore 3 as core 3 on socket 0 00:08:32.854 EAL: Detected lcore 4 as core 4 on socket 0 00:08:32.854 EAL: Detected lcore 5 as core 5 on socket 0 00:08:32.854 EAL: Detected lcore 6 as core 6 on socket 0 00:08:32.854 EAL: Detected lcore 7 as core 7 on socket 0 00:08:32.854 EAL: Detected lcore 8 as core 8 on socket 0 00:08:32.854 EAL: Detected lcore 9 as core 9 on socket 0 00:08:32.854 EAL: Detected lcore 10 as core 10 on socket 0 00:08:32.854 EAL: Detected lcore 11 as core 11 on socket 0 00:08:32.854 EAL: Detected lcore 12 as core 12 on socket 0 00:08:32.854 EAL: Detected lcore 13 as core 13 on socket 0 00:08:32.854 EAL: Detected lcore 14 as core 14 on socket 0 00:08:32.854 EAL: Detected lcore 15 as core 15 on socket 0 00:08:32.854 EAL: Detected lcore 16 as core 16 on socket 0 00:08:32.854 EAL: Detected lcore 17 as core 17 on socket 0 00:08:32.854 EAL: Detected lcore 18 as core 18 on socket 0 00:08:32.854 EAL: Detected lcore 19 as core 19 on socket 0 00:08:32.854 EAL: Detected lcore 20 as core 20 on socket 0 00:08:32.854 EAL: Detected lcore 21 as core 21 on socket 0 00:08:32.854 EAL: Detected lcore 22 as core 22 on socket 0 00:08:32.854 EAL: Detected lcore 23 as core 23 on socket 0 00:08:32.854 EAL: Detected lcore 24 as core 24 on socket 0 00:08:32.854 EAL: Detected lcore 25 as core 25 on socket 0 00:08:32.854 EAL: Detected lcore 26 as core 26 on socket 0 00:08:32.854 EAL: Detected lcore 27 as core 27 on socket 0 00:08:32.854 EAL: Detected lcore 28 as core 28 on socket 0 00:08:32.854 EAL: Detected lcore 29 as core 29 on socket 0 00:08:32.854 EAL: Detected lcore 30 as core 30 on socket 0 00:08:32.854 EAL: Detected lcore 31 as core 31 on socket 0 00:08:32.854 EAL: Detected lcore 32 as core 32 on socket 0 00:08:32.854 EAL: Detected lcore 33 as core 33 on socket 0 00:08:32.854 EAL: Detected lcore 34 as core 34 on socket 0 00:08:32.854 EAL: Detected lcore 35 as core 35 on socket 0 00:08:32.854 EAL: Detected lcore 36 as core 0 on socket 1 00:08:32.854 EAL: Detected lcore 37 as core 1 on socket 1 00:08:32.854 EAL: Detected lcore 38 as core 2 on socket 1 00:08:32.854 EAL: Detected lcore 39 as core 3 on socket 1 00:08:32.854 EAL: Detected lcore 40 as core 4 on socket 1 00:08:32.854 EAL: Detected lcore 41 as core 5 on socket 1 00:08:32.854 EAL: Detected lcore 42 as core 6 on socket 1 00:08:32.854 EAL: Detected lcore 43 as core 7 on socket 1 00:08:32.854 EAL: Detected lcore 44 as core 8 on socket 1 00:08:32.854 EAL: Detected 
lcore 45 as core 9 on socket 1 00:08:32.854 EAL: Detected lcore 46 as core 10 on socket 1 00:08:32.854 EAL: Detected lcore 47 as core 11 on socket 1 00:08:32.854 EAL: Detected lcore 48 as core 12 on socket 1 00:08:32.854 EAL: Detected lcore 49 as core 13 on socket 1 00:08:32.854 EAL: Detected lcore 50 as core 14 on socket 1 00:08:32.854 EAL: Detected lcore 51 as core 15 on socket 1 00:08:32.854 EAL: Detected lcore 52 as core 16 on socket 1 00:08:32.854 EAL: Detected lcore 53 as core 17 on socket 1 00:08:32.854 EAL: Detected lcore 54 as core 18 on socket 1 00:08:32.854 EAL: Detected lcore 55 as core 19 on socket 1 00:08:32.854 EAL: Detected lcore 56 as core 20 on socket 1 00:08:32.855 EAL: Detected lcore 57 as core 21 on socket 1 00:08:32.855 EAL: Detected lcore 58 as core 22 on socket 1 00:08:32.855 EAL: Detected lcore 59 as core 23 on socket 1 00:08:32.855 EAL: Detected lcore 60 as core 24 on socket 1 00:08:32.855 EAL: Detected lcore 61 as core 25 on socket 1 00:08:32.855 EAL: Detected lcore 62 as core 26 on socket 1 00:08:32.855 EAL: Detected lcore 63 as core 27 on socket 1 00:08:32.855 EAL: Detected lcore 64 as core 28 on socket 1 00:08:32.855 EAL: Detected lcore 65 as core 29 on socket 1 00:08:32.855 EAL: Detected lcore 66 as core 30 on socket 1 00:08:32.855 EAL: Detected lcore 67 as core 31 on socket 1 00:08:32.855 EAL: Detected lcore 68 as core 32 on socket 1 00:08:32.855 EAL: Detected lcore 69 as core 33 on socket 1 00:08:32.855 EAL: Detected lcore 70 as core 34 on socket 1 00:08:32.855 EAL: Detected lcore 71 as core 35 on socket 1 00:08:32.855 EAL: Detected lcore 72 as core 0 on socket 0 00:08:32.855 EAL: Detected lcore 73 as core 1 on socket 0 00:08:32.855 EAL: Detected lcore 74 as core 2 on socket 0 00:08:32.855 EAL: Detected lcore 75 as core 3 on socket 0 00:08:32.855 EAL: Detected lcore 76 as core 4 on socket 0 00:08:32.855 EAL: Detected lcore 77 as core 5 on socket 0 00:08:32.855 EAL: Detected lcore 78 as core 6 on socket 0 00:08:32.855 EAL: Detected lcore 79 as core 7 on socket 0 00:08:32.855 EAL: Detected lcore 80 as core 8 on socket 0 00:08:32.855 EAL: Detected lcore 81 as core 9 on socket 0 00:08:32.855 EAL: Detected lcore 82 as core 10 on socket 0 00:08:32.855 EAL: Detected lcore 83 as core 11 on socket 0 00:08:32.855 EAL: Detected lcore 84 as core 12 on socket 0 00:08:32.855 EAL: Detected lcore 85 as core 13 on socket 0 00:08:32.855 EAL: Detected lcore 86 as core 14 on socket 0 00:08:32.855 EAL: Detected lcore 87 as core 15 on socket 0 00:08:32.855 EAL: Detected lcore 88 as core 16 on socket 0 00:08:32.855 EAL: Detected lcore 89 as core 17 on socket 0 00:08:32.855 EAL: Detected lcore 90 as core 18 on socket 0 00:08:32.855 EAL: Detected lcore 91 as core 19 on socket 0 00:08:32.855 EAL: Detected lcore 92 as core 20 on socket 0 00:08:32.855 EAL: Detected lcore 93 as core 21 on socket 0 00:08:32.855 EAL: Detected lcore 94 as core 22 on socket 0 00:08:32.855 EAL: Detected lcore 95 as core 23 on socket 0 00:08:32.855 EAL: Detected lcore 96 as core 24 on socket 0 00:08:32.855 EAL: Detected lcore 97 as core 25 on socket 0 00:08:32.855 EAL: Detected lcore 98 as core 26 on socket 0 00:08:32.855 EAL: Detected lcore 99 as core 27 on socket 0 00:08:32.855 EAL: Detected lcore 100 as core 28 on socket 0 00:08:32.855 EAL: Detected lcore 101 as core 29 on socket 0 00:08:32.855 EAL: Detected lcore 102 as core 30 on socket 0 00:08:32.855 EAL: Detected lcore 103 as core 31 on socket 0 00:08:32.855 EAL: Detected lcore 104 as core 32 on socket 0 00:08:32.855 EAL: Detected lcore 105 as core 33 
on socket 0 00:08:32.855 EAL: Detected lcore 106 as core 34 on socket 0 00:08:32.855 EAL: Detected lcore 107 as core 35 on socket 0 00:08:32.855 EAL: Detected lcore 108 as core 0 on socket 1 00:08:32.855 EAL: Detected lcore 109 as core 1 on socket 1 00:08:32.855 EAL: Detected lcore 110 as core 2 on socket 1 00:08:32.855 EAL: Detected lcore 111 as core 3 on socket 1 00:08:32.855 EAL: Detected lcore 112 as core 4 on socket 1 00:08:32.855 EAL: Detected lcore 113 as core 5 on socket 1 00:08:32.855 EAL: Detected lcore 114 as core 6 on socket 1 00:08:32.855 EAL: Detected lcore 115 as core 7 on socket 1 00:08:32.855 EAL: Detected lcore 116 as core 8 on socket 1 00:08:32.855 EAL: Detected lcore 117 as core 9 on socket 1 00:08:32.855 EAL: Detected lcore 118 as core 10 on socket 1 00:08:32.855 EAL: Detected lcore 119 as core 11 on socket 1 00:08:32.855 EAL: Detected lcore 120 as core 12 on socket 1 00:08:32.855 EAL: Detected lcore 121 as core 13 on socket 1 00:08:32.855 EAL: Detected lcore 122 as core 14 on socket 1 00:08:32.855 EAL: Detected lcore 123 as core 15 on socket 1 00:08:32.855 EAL: Detected lcore 124 as core 16 on socket 1 00:08:32.855 EAL: Detected lcore 125 as core 17 on socket 1 00:08:32.855 EAL: Detected lcore 126 as core 18 on socket 1 00:08:32.855 EAL: Detected lcore 127 as core 19 on socket 1 00:08:32.855 EAL: Skipped lcore 128 as core 20 on socket 1 00:08:32.855 EAL: Skipped lcore 129 as core 21 on socket 1 00:08:32.855 EAL: Skipped lcore 130 as core 22 on socket 1 00:08:32.855 EAL: Skipped lcore 131 as core 23 on socket 1 00:08:32.855 EAL: Skipped lcore 132 as core 24 on socket 1 00:08:32.855 EAL: Skipped lcore 133 as core 25 on socket 1 00:08:32.855 EAL: Skipped lcore 134 as core 26 on socket 1 00:08:32.855 EAL: Skipped lcore 135 as core 27 on socket 1 00:08:32.855 EAL: Skipped lcore 136 as core 28 on socket 1 00:08:32.855 EAL: Skipped lcore 137 as core 29 on socket 1 00:08:32.855 EAL: Skipped lcore 138 as core 30 on socket 1 00:08:32.855 EAL: Skipped lcore 139 as core 31 on socket 1 00:08:32.855 EAL: Skipped lcore 140 as core 32 on socket 1 00:08:32.855 EAL: Skipped lcore 141 as core 33 on socket 1 00:08:32.855 EAL: Skipped lcore 142 as core 34 on socket 1 00:08:32.855 EAL: Skipped lcore 143 as core 35 on socket 1 00:08:32.855 EAL: Maximum logical cores by configuration: 128 00:08:32.855 EAL: Detected CPU lcores: 128 00:08:32.855 EAL: Detected NUMA nodes: 2 00:08:32.855 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:32.855 EAL: Detected shared linkage of DPDK 00:08:32.855 EAL: No shared files mode enabled, IPC will be disabled 00:08:32.855 EAL: Bus pci wants IOVA as 'DC' 00:08:32.855 EAL: Buses did not request a specific IOVA mode. 00:08:32.855 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:32.855 EAL: Selected IOVA mode 'VA' 00:08:32.855 EAL: Probing VFIO support... 00:08:32.855 EAL: IOMMU type 1 (Type 1) is supported 00:08:32.855 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:32.855 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:32.855 EAL: VFIO support initialized 00:08:32.855 EAL: Ask a virtual area of 0x2e000 bytes 00:08:32.855 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:32.855 EAL: Setting up physically contiguous memory... 
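The lcore/socket layout EAL reports above can be cross-checked against the kernel's own view with generic tooling (this is not an SPDK script, just a quick sanity check on the same host):

#!/usr/bin/env bash
# Compare EAL's lcore/socket report with the kernel topology.
lscpu -p=CPU,CORE,SOCKET,NODE | grep -v '^#' | head
for node in /sys/devices/system/node/node[0-9]*; do
    echo "$(basename "$node"): cpus $(cat "$node/cpulist")"
done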
00:08:32.855 EAL: Setting maximum number of open files to 524288 00:08:32.855 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:32.855 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:32.855 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:32.855 EAL: Ask a virtual area of 0x61000 bytes 00:08:32.855 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:32.855 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:32.855 EAL: Ask a virtual area of 0x400000000 bytes 00:08:32.855 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:32.855 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:32.855 EAL: Ask a virtual area of 0x61000 bytes 00:08:32.855 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:32.855 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:32.855 EAL: Ask a virtual area of 0x400000000 bytes 00:08:32.855 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:32.855 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:32.855 EAL: Ask a virtual area of 0x61000 bytes 00:08:32.855 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:32.855 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:32.855 EAL: Ask a virtual area of 0x400000000 bytes 00:08:32.855 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:32.855 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:32.855 EAL: Ask a virtual area of 0x61000 bytes 00:08:32.855 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:32.855 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:32.855 EAL: Ask a virtual area of 0x400000000 bytes 00:08:32.855 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:32.855 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:32.855 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:32.855 EAL: Ask a virtual area of 0x61000 bytes 00:08:32.855 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:32.855 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:32.855 EAL: Ask a virtual area of 0x400000000 bytes 00:08:32.855 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:32.855 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:32.855 EAL: Ask a virtual area of 0x61000 bytes 00:08:32.855 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:32.855 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:32.855 EAL: Ask a virtual area of 0x400000000 bytes 00:08:32.855 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:32.855 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:32.855 EAL: Ask a virtual area of 0x61000 bytes 00:08:32.855 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:32.855 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:32.855 EAL: Ask a virtual area of 0x400000000 bytes 00:08:32.855 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:32.855 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:32.855 EAL: Ask a virtual area of 0x61000 bytes 00:08:32.855 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:32.855 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:32.855 EAL: Ask a virtual area of 0x400000000 bytes 00:08:32.855 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:08:32.855 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:32.855 EAL: Hugepages will be freed exactly as allocated. 00:08:32.855 EAL: No shared files mode enabled, IPC is disabled 00:08:32.855 EAL: No shared files mode enabled, IPC is disabled 00:08:32.855 EAL: TSC frequency is ~2400000 KHz 00:08:32.855 EAL: Main lcore 0 is ready (tid=7fe5cf8f9a00;cpuset=[0]) 00:08:32.855 EAL: Trying to obtain current memory policy. 00:08:32.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:32.855 EAL: Restoring previous memory policy: 0 00:08:32.855 EAL: request: mp_malloc_sync 00:08:32.855 EAL: No shared files mode enabled, IPC is disabled 00:08:32.855 EAL: Heap on socket 0 was expanded by 2MB 00:08:32.855 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:33.117 EAL: Mem event callback 'spdk:(nil)' registered 00:08:33.117 00:08:33.117 00:08:33.117 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.117 http://cunit.sourceforge.net/ 00:08:33.117 00:08:33.117 00:08:33.117 Suite: components_suite 00:08:33.117 Test: vtophys_malloc_test ...passed 00:08:33.117 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:33.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.117 EAL: Restoring previous memory policy: 4 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was expanded by 4MB 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was shrunk by 4MB 00:08:33.117 EAL: Trying to obtain current memory policy. 00:08:33.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.117 EAL: Restoring previous memory policy: 4 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was expanded by 6MB 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was shrunk by 6MB 00:08:33.117 EAL: Trying to obtain current memory policy. 00:08:33.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.117 EAL: Restoring previous memory policy: 4 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was expanded by 10MB 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was shrunk by 10MB 00:08:33.117 EAL: Trying to obtain current memory policy. 
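The segment lists and heap expansions above depend on 2 MB hugepages being reserved on both NUMA nodes. A sketch of reserving them and re-running one env test binary on its own follows; the binary path is taken from the trace, while the per-node page counts are illustrative, not what this CI job actually uses.

#!/usr/bin/env bash
# Reserve 2 MB hugepages on both nodes, then run the vtophys test directly.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for node in /sys/devices/system/node/node[01]; do
    echo 1024 | sudo tee "$node/hugepages/hugepages-2048kB/nr_hugepages" > /dev/null
done
sudo "$rootdir/test/env/vtophys/vtophys"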
00:08:33.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.117 EAL: Restoring previous memory policy: 4 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was expanded by 18MB 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was shrunk by 18MB 00:08:33.117 EAL: Trying to obtain current memory policy. 00:08:33.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.117 EAL: Restoring previous memory policy: 4 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was expanded by 34MB 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was shrunk by 34MB 00:08:33.117 EAL: Trying to obtain current memory policy. 00:08:33.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.117 EAL: Restoring previous memory policy: 4 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was expanded by 66MB 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was shrunk by 66MB 00:08:33.117 EAL: Trying to obtain current memory policy. 00:08:33.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.117 EAL: Restoring previous memory policy: 4 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was expanded by 130MB 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was shrunk by 130MB 00:08:33.117 EAL: Trying to obtain current memory policy. 00:08:33.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.117 EAL: Restoring previous memory policy: 4 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was expanded by 258MB 00:08:33.117 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.117 EAL: request: mp_malloc_sync 00:08:33.117 EAL: No shared files mode enabled, IPC is disabled 00:08:33.117 EAL: Heap on socket 0 was shrunk by 258MB 00:08:33.117 EAL: Trying to obtain current memory policy. 
00:08:33.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.378 EAL: Restoring previous memory policy: 4 00:08:33.378 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.378 EAL: request: mp_malloc_sync 00:08:33.378 EAL: No shared files mode enabled, IPC is disabled 00:08:33.378 EAL: Heap on socket 0 was expanded by 514MB 00:08:33.378 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.378 EAL: request: mp_malloc_sync 00:08:33.378 EAL: No shared files mode enabled, IPC is disabled 00:08:33.378 EAL: Heap on socket 0 was shrunk by 514MB 00:08:33.378 EAL: Trying to obtain current memory policy. 00:08:33.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.639 EAL: Restoring previous memory policy: 4 00:08:33.639 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.639 EAL: request: mp_malloc_sync 00:08:33.639 EAL: No shared files mode enabled, IPC is disabled 00:08:33.639 EAL: Heap on socket 0 was expanded by 1026MB 00:08:33.639 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.639 EAL: request: mp_malloc_sync 00:08:33.639 EAL: No shared files mode enabled, IPC is disabled 00:08:33.639 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:33.639 passed 00:08:33.639 00:08:33.639 Run Summary: Type Total Ran Passed Failed Inactive 00:08:33.639 suites 1 1 n/a 0 0 00:08:33.639 tests 2 2 2 0 0 00:08:33.639 asserts 497 497 497 0 n/a 00:08:33.639 00:08:33.639 Elapsed time = 0.686 seconds 00:08:33.639 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.639 EAL: request: mp_malloc_sync 00:08:33.639 EAL: No shared files mode enabled, IPC is disabled 00:08:33.639 EAL: Heap on socket 0 was shrunk by 2MB 00:08:33.639 EAL: No shared files mode enabled, IPC is disabled 00:08:33.639 EAL: No shared files mode enabled, IPC is disabled 00:08:33.639 EAL: No shared files mode enabled, IPC is disabled 00:08:33.639 00:08:33.639 real 0m0.836s 00:08:33.639 user 0m0.435s 00:08:33.639 sys 0m0.376s 00:08:33.639 14:02:22 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.639 14:02:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:33.639 ************************************ 00:08:33.639 END TEST env_vtophys 00:08:33.639 ************************************ 00:08:33.900 14:02:22 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:33.900 14:02:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.900 14:02:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.900 14:02:22 env -- common/autotest_common.sh@10 -- # set +x 00:08:33.900 ************************************ 00:08:33.900 START TEST env_pci 00:08:33.900 ************************************ 00:08:33.900 14:02:22 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:33.900 00:08:33.900 00:08:33.900 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.900 http://cunit.sourceforge.net/ 00:08:33.900 00:08:33.900 00:08:33.900 Suite: pci 00:08:33.900 Test: pci_hook ...[2024-12-06 14:02:22.358743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2586112 has claimed it 00:08:33.900 EAL: Cannot find device (10000:00:01.0) 00:08:33.900 EAL: Failed to attach device on primary process 00:08:33.900 passed 00:08:33.900 00:08:33.900 Run Summary: Type Total Ran Passed Failed Inactive 
00:08:33.900 suites 1 1 n/a 0 0 00:08:33.900 tests 1 1 1 0 0 00:08:33.900 asserts 25 25 25 0 n/a 00:08:33.900 00:08:33.900 Elapsed time = 0.031 seconds 00:08:33.900 00:08:33.900 real 0m0.052s 00:08:33.900 user 0m0.018s 00:08:33.900 sys 0m0.034s 00:08:33.900 14:02:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.900 14:02:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:33.900 ************************************ 00:08:33.900 END TEST env_pci 00:08:33.900 ************************************ 00:08:33.900 14:02:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:33.900 14:02:22 env -- env/env.sh@15 -- # uname 00:08:33.900 14:02:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:33.900 14:02:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:33.900 14:02:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:33.900 14:02:22 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:33.900 14:02:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.900 14:02:22 env -- common/autotest_common.sh@10 -- # set +x 00:08:33.900 ************************************ 00:08:33.900 START TEST env_dpdk_post_init 00:08:33.900 ************************************ 00:08:33.900 14:02:22 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:33.900 EAL: Detected CPU lcores: 128 00:08:33.900 EAL: Detected NUMA nodes: 2 00:08:33.900 EAL: Detected shared linkage of DPDK 00:08:33.900 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:33.900 EAL: Selected IOVA mode 'VA' 00:08:33.900 EAL: VFIO support initialized 00:08:34.161 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:34.161 EAL: Using IOMMU type 1 (Type 1) 00:08:34.161 EAL: Ignore mapping IO port bar(1) 00:08:34.422 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:08:34.422 EAL: Ignore mapping IO port bar(1) 00:08:34.683 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:08:34.683 EAL: Ignore mapping IO port bar(1) 00:08:34.683 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:08:34.944 EAL: Ignore mapping IO port bar(1) 00:08:34.944 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:08:35.205 EAL: Ignore mapping IO port bar(1) 00:08:35.205 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:08:35.465 EAL: Ignore mapping IO port bar(1) 00:08:35.465 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:08:35.465 EAL: Ignore mapping IO port bar(1) 00:08:35.731 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:08:35.731 EAL: Ignore mapping IO port bar(1) 00:08:36.054 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:08:36.054 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:08:36.315 EAL: Ignore mapping IO port bar(1) 00:08:36.315 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:08:36.576 EAL: Ignore mapping IO port bar(1) 00:08:36.576 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:08:36.576 EAL: Ignore mapping IO port bar(1) 00:08:36.836 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:08:36.836 EAL: Ignore mapping IO port bar(1) 00:08:37.096 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:08:37.096 EAL: Ignore mapping IO port bar(1) 00:08:37.380 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:08:37.380 EAL: Ignore mapping IO port bar(1) 00:08:37.380 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:08:37.641 EAL: Ignore mapping IO port bar(1) 00:08:37.641 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:08:37.902 EAL: Ignore mapping IO port bar(1) 00:08:37.902 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:08:37.902 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:08:37.902 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:08:38.163 Starting DPDK initialization... 00:08:38.163 Starting SPDK post initialization... 00:08:38.163 SPDK NVMe probe 00:08:38.163 Attaching to 0000:65:00.0 00:08:38.163 Attached to 0000:65:00.0 00:08:38.163 Cleaning up... 00:08:40.078 00:08:40.078 real 0m5.749s 00:08:40.078 user 0m0.108s 00:08:40.078 sys 0m0.195s 00:08:40.078 14:02:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.078 14:02:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:40.078 ************************************ 00:08:40.078 END TEST env_dpdk_post_init 00:08:40.078 ************************************ 00:08:40.078 14:02:28 env -- env/env.sh@26 -- # uname 00:08:40.078 14:02:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:40.078 14:02:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:40.078 14:02:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.078 14:02:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.078 14:02:28 env -- common/autotest_common.sh@10 -- # set +x 00:08:40.078 ************************************ 00:08:40.078 START TEST env_mem_callbacks 00:08:40.078 ************************************ 00:08:40.078 14:02:28 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:40.078 EAL: Detected CPU lcores: 128 00:08:40.078 EAL: Detected NUMA nodes: 2 00:08:40.078 EAL: Detected shared linkage of DPDK 00:08:40.078 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:40.078 EAL: Selected IOVA mode 'VA' 00:08:40.078 EAL: VFIO support initialized 00:08:40.078 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:40.078 00:08:40.078 00:08:40.078 CUnit - A unit testing framework for C - Version 2.1-3 00:08:40.078 http://cunit.sourceforge.net/ 00:08:40.078 00:08:40.078 00:08:40.078 Suite: memory 00:08:40.078 Test: test ... 
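The register/unregister lines that follow come from the test's notification callback, which logs every region the SPDK env layer registers or unregisters while the test allocates and frees buffers of various sizes, and CUnit then checks that bookkeeping. A hedged sketch of re-running just this one binary, using the path quoted verbatim in the run_test line above (root access and already-configured hugepages are assumptions about the host):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/env/mem_callbacks/mem_callbacks   # prints the same register/unregister trace plus a CUnit summary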
00:08:40.078 register 0x200000200000 2097152 00:08:40.079 malloc 3145728 00:08:40.079 register 0x200000400000 4194304 00:08:40.079 buf 0x200000500000 len 3145728 PASSED 00:08:40.079 malloc 64 00:08:40.079 buf 0x2000004fff40 len 64 PASSED 00:08:40.079 malloc 4194304 00:08:40.079 register 0x200000800000 6291456 00:08:40.079 buf 0x200000a00000 len 4194304 PASSED 00:08:40.079 free 0x200000500000 3145728 00:08:40.079 free 0x2000004fff40 64 00:08:40.079 unregister 0x200000400000 4194304 PASSED 00:08:40.079 free 0x200000a00000 4194304 00:08:40.079 unregister 0x200000800000 6291456 PASSED 00:08:40.079 malloc 8388608 00:08:40.079 register 0x200000400000 10485760 00:08:40.079 buf 0x200000600000 len 8388608 PASSED 00:08:40.079 free 0x200000600000 8388608 00:08:40.079 unregister 0x200000400000 10485760 PASSED 00:08:40.079 passed 00:08:40.079 00:08:40.079 Run Summary: Type Total Ran Passed Failed Inactive 00:08:40.079 suites 1 1 n/a 0 0 00:08:40.079 tests 1 1 1 0 0 00:08:40.079 asserts 15 15 15 0 n/a 00:08:40.079 00:08:40.079 Elapsed time = 0.010 seconds 00:08:40.079 00:08:40.079 real 0m0.071s 00:08:40.079 user 0m0.018s 00:08:40.079 sys 0m0.053s 00:08:40.079 14:02:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.079 14:02:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:40.079 ************************************ 00:08:40.079 END TEST env_mem_callbacks 00:08:40.079 ************************************ 00:08:40.079 00:08:40.079 real 0m7.541s 00:08:40.079 user 0m1.045s 00:08:40.079 sys 0m1.060s 00:08:40.079 14:02:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.079 14:02:28 env -- common/autotest_common.sh@10 -- # set +x 00:08:40.079 ************************************ 00:08:40.079 END TEST env 00:08:40.079 ************************************ 00:08:40.079 14:02:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:40.079 14:02:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.079 14:02:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.079 14:02:28 -- common/autotest_common.sh@10 -- # set +x 00:08:40.079 ************************************ 00:08:40.079 START TEST rpc 00:08:40.079 ************************************ 00:08:40.079 14:02:28 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:40.079 * Looking for test storage... 
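The scripts/common.sh trace that follows is the shared harness (autotest_common.sh, sourced here by the rpc test) deciding which coverage flags to export: it pulls the installed lcov version out of 'lcov --version', asks whether it is older than 2, and if so exports LCOV_OPTS/LCOV with the lcov 1.x style --rc options. A rough, hedged equivalent of that check, using sort -V instead of the script's own cmp_versions helper, purely for illustration:

  lcov_ver=$(lcov --version | awk '{print $NF}')
  if [ "$(printf '%s\n' "$lcov_ver" 2 | sort -V | head -n1)" = "$lcov_ver" ] && [ "$lcov_ver" != 2 ]; then
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'   # lcov < 2
  fi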
00:08:40.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:40.079 14:02:28 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:40.079 14:02:28 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:40.079 14:02:28 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:40.079 14:02:28 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:40.079 14:02:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.079 14:02:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.079 14:02:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.079 14:02:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.079 14:02:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.079 14:02:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.079 14:02:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.079 14:02:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.079 14:02:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.079 14:02:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.079 14:02:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.079 14:02:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:40.079 14:02:28 rpc -- scripts/common.sh@345 -- # : 1 00:08:40.079 14:02:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.079 14:02:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.079 14:02:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:40.079 14:02:28 rpc -- scripts/common.sh@353 -- # local d=1 00:08:40.079 14:02:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.079 14:02:28 rpc -- scripts/common.sh@355 -- # echo 1 00:08:40.079 14:02:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.340 14:02:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:40.340 14:02:28 rpc -- scripts/common.sh@353 -- # local d=2 00:08:40.340 14:02:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.340 14:02:28 rpc -- scripts/common.sh@355 -- # echo 2 00:08:40.340 14:02:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.340 14:02:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.340 14:02:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.340 14:02:28 rpc -- scripts/common.sh@368 -- # return 0 00:08:40.340 14:02:28 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.340 14:02:28 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:40.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.340 --rc genhtml_branch_coverage=1 00:08:40.340 --rc genhtml_function_coverage=1 00:08:40.340 --rc genhtml_legend=1 00:08:40.340 --rc geninfo_all_blocks=1 00:08:40.340 --rc geninfo_unexecuted_blocks=1 00:08:40.340 00:08:40.340 ' 00:08:40.340 14:02:28 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:40.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.340 --rc genhtml_branch_coverage=1 00:08:40.340 --rc genhtml_function_coverage=1 00:08:40.340 --rc genhtml_legend=1 00:08:40.340 --rc geninfo_all_blocks=1 00:08:40.340 --rc geninfo_unexecuted_blocks=1 00:08:40.340 00:08:40.340 ' 00:08:40.340 14:02:28 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:40.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.340 --rc genhtml_branch_coverage=1 00:08:40.340 --rc genhtml_function_coverage=1 
00:08:40.340 --rc genhtml_legend=1 00:08:40.340 --rc geninfo_all_blocks=1 00:08:40.340 --rc geninfo_unexecuted_blocks=1 00:08:40.340 00:08:40.340 ' 00:08:40.340 14:02:28 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:40.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.340 --rc genhtml_branch_coverage=1 00:08:40.340 --rc genhtml_function_coverage=1 00:08:40.340 --rc genhtml_legend=1 00:08:40.340 --rc geninfo_all_blocks=1 00:08:40.340 --rc geninfo_unexecuted_blocks=1 00:08:40.340 00:08:40.340 ' 00:08:40.340 14:02:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2587430 00:08:40.340 14:02:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:40.340 14:02:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2587430 00:08:40.340 14:02:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:40.340 14:02:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 2587430 ']' 00:08:40.340 14:02:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.340 14:02:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.340 14:02:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.341 14:02:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.341 14:02:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.341 [2024-12-06 14:02:28.784842] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:08:40.341 [2024-12-06 14:02:28.784911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587430 ] 00:08:40.341 [2024-12-06 14:02:28.874836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.341 [2024-12-06 14:02:28.927294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:40.341 [2024-12-06 14:02:28.927345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2587430' to capture a snapshot of events at runtime. 00:08:40.341 [2024-12-06 14:02:28.927354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.341 [2024-12-06 14:02:28.927361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.341 [2024-12-06 14:02:28.927368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2587430 for offline analysis/debug. 
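The app_setup_trace notices above spell out how to inspect the 'bdev' tracepoint group that rpc.sh enabled with '-e bdev'. A hedged sketch of acting on that hint while this spdk_tgt (pid 2587430) is still alive; the command and shm path are taken verbatim from the notice, and spdk_trace living next to spdk_tgt under build/bin is an assumption about this build:

  ./build/bin/spdk_trace -s spdk_tgt -p 2587430      # snapshot the live trace ring for the bdev group
  cp /dev/shm/spdk_tgt_trace.pid2587430 /tmp/        # or keep the shm file for offline analysis, as the notice suggests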
00:08:40.341 [2024-12-06 14:02:28.928138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.285 14:02:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.285 14:02:29 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:41.285 14:02:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:41.285 14:02:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:41.285 14:02:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:41.285 14:02:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:41.285 14:02:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.285 14:02:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.285 14:02:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.285 ************************************ 00:08:41.285 START TEST rpc_integrity 00:08:41.285 ************************************ 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:41.285 { 00:08:41.285 "name": "Malloc0", 00:08:41.285 "aliases": [ 00:08:41.285 "6350f465-3f50-4608-9249-2a65a9eb85fa" 00:08:41.285 ], 00:08:41.285 "product_name": "Malloc disk", 00:08:41.285 "block_size": 512, 00:08:41.285 "num_blocks": 16384, 00:08:41.285 "uuid": "6350f465-3f50-4608-9249-2a65a9eb85fa", 00:08:41.285 "assigned_rate_limits": { 00:08:41.285 "rw_ios_per_sec": 0, 00:08:41.285 "rw_mbytes_per_sec": 0, 00:08:41.285 "r_mbytes_per_sec": 0, 00:08:41.285 "w_mbytes_per_sec": 0 00:08:41.285 }, 
00:08:41.285 "claimed": false, 00:08:41.285 "zoned": false, 00:08:41.285 "supported_io_types": { 00:08:41.285 "read": true, 00:08:41.285 "write": true, 00:08:41.285 "unmap": true, 00:08:41.285 "flush": true, 00:08:41.285 "reset": true, 00:08:41.285 "nvme_admin": false, 00:08:41.285 "nvme_io": false, 00:08:41.285 "nvme_io_md": false, 00:08:41.285 "write_zeroes": true, 00:08:41.285 "zcopy": true, 00:08:41.285 "get_zone_info": false, 00:08:41.285 "zone_management": false, 00:08:41.285 "zone_append": false, 00:08:41.285 "compare": false, 00:08:41.285 "compare_and_write": false, 00:08:41.285 "abort": true, 00:08:41.285 "seek_hole": false, 00:08:41.285 "seek_data": false, 00:08:41.285 "copy": true, 00:08:41.285 "nvme_iov_md": false 00:08:41.285 }, 00:08:41.285 "memory_domains": [ 00:08:41.285 { 00:08:41.285 "dma_device_id": "system", 00:08:41.285 "dma_device_type": 1 00:08:41.285 }, 00:08:41.285 { 00:08:41.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.285 "dma_device_type": 2 00:08:41.285 } 00:08:41.285 ], 00:08:41.285 "driver_specific": {} 00:08:41.285 } 00:08:41.285 ]' 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.285 [2024-12-06 14:02:29.775338] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:41.285 [2024-12-06 14:02:29.775384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.285 [2024-12-06 14:02:29.775400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25f5b00 00:08:41.285 [2024-12-06 14:02:29.775409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.285 [2024-12-06 14:02:29.776974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.285 [2024-12-06 14:02:29.777011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:41.285 Passthru0 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.285 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.285 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:41.285 { 00:08:41.285 "name": "Malloc0", 00:08:41.285 "aliases": [ 00:08:41.285 "6350f465-3f50-4608-9249-2a65a9eb85fa" 00:08:41.285 ], 00:08:41.285 "product_name": "Malloc disk", 00:08:41.285 "block_size": 512, 00:08:41.285 "num_blocks": 16384, 00:08:41.285 "uuid": "6350f465-3f50-4608-9249-2a65a9eb85fa", 00:08:41.285 "assigned_rate_limits": { 00:08:41.285 "rw_ios_per_sec": 0, 00:08:41.285 "rw_mbytes_per_sec": 0, 00:08:41.285 "r_mbytes_per_sec": 0, 00:08:41.285 "w_mbytes_per_sec": 0 00:08:41.285 }, 00:08:41.285 "claimed": true, 00:08:41.285 "claim_type": "exclusive_write", 00:08:41.285 "zoned": false, 00:08:41.285 "supported_io_types": { 00:08:41.285 "read": true, 00:08:41.285 "write": true, 00:08:41.285 "unmap": true, 00:08:41.285 "flush": 
true, 00:08:41.286 "reset": true, 00:08:41.286 "nvme_admin": false, 00:08:41.286 "nvme_io": false, 00:08:41.286 "nvme_io_md": false, 00:08:41.286 "write_zeroes": true, 00:08:41.286 "zcopy": true, 00:08:41.286 "get_zone_info": false, 00:08:41.286 "zone_management": false, 00:08:41.286 "zone_append": false, 00:08:41.286 "compare": false, 00:08:41.286 "compare_and_write": false, 00:08:41.286 "abort": true, 00:08:41.286 "seek_hole": false, 00:08:41.286 "seek_data": false, 00:08:41.286 "copy": true, 00:08:41.286 "nvme_iov_md": false 00:08:41.286 }, 00:08:41.286 "memory_domains": [ 00:08:41.286 { 00:08:41.286 "dma_device_id": "system", 00:08:41.286 "dma_device_type": 1 00:08:41.286 }, 00:08:41.286 { 00:08:41.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.286 "dma_device_type": 2 00:08:41.286 } 00:08:41.286 ], 00:08:41.286 "driver_specific": {} 00:08:41.286 }, 00:08:41.286 { 00:08:41.286 "name": "Passthru0", 00:08:41.286 "aliases": [ 00:08:41.286 "3c8d9104-4bb2-5c01-95c8-fa13a53a458a" 00:08:41.286 ], 00:08:41.286 "product_name": "passthru", 00:08:41.286 "block_size": 512, 00:08:41.286 "num_blocks": 16384, 00:08:41.286 "uuid": "3c8d9104-4bb2-5c01-95c8-fa13a53a458a", 00:08:41.286 "assigned_rate_limits": { 00:08:41.286 "rw_ios_per_sec": 0, 00:08:41.286 "rw_mbytes_per_sec": 0, 00:08:41.286 "r_mbytes_per_sec": 0, 00:08:41.286 "w_mbytes_per_sec": 0 00:08:41.286 }, 00:08:41.286 "claimed": false, 00:08:41.286 "zoned": false, 00:08:41.286 "supported_io_types": { 00:08:41.286 "read": true, 00:08:41.286 "write": true, 00:08:41.286 "unmap": true, 00:08:41.286 "flush": true, 00:08:41.286 "reset": true, 00:08:41.286 "nvme_admin": false, 00:08:41.286 "nvme_io": false, 00:08:41.286 "nvme_io_md": false, 00:08:41.286 "write_zeroes": true, 00:08:41.286 "zcopy": true, 00:08:41.286 "get_zone_info": false, 00:08:41.286 "zone_management": false, 00:08:41.286 "zone_append": false, 00:08:41.286 "compare": false, 00:08:41.286 "compare_and_write": false, 00:08:41.286 "abort": true, 00:08:41.286 "seek_hole": false, 00:08:41.286 "seek_data": false, 00:08:41.286 "copy": true, 00:08:41.286 "nvme_iov_md": false 00:08:41.286 }, 00:08:41.286 "memory_domains": [ 00:08:41.286 { 00:08:41.286 "dma_device_id": "system", 00:08:41.286 "dma_device_type": 1 00:08:41.286 }, 00:08:41.286 { 00:08:41.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.286 "dma_device_type": 2 00:08:41.286 } 00:08:41.286 ], 00:08:41.286 "driver_specific": { 00:08:41.286 "passthru": { 00:08:41.286 "name": "Passthru0", 00:08:41.286 "base_bdev_name": "Malloc0" 00:08:41.286 } 00:08:41.286 } 00:08:41.286 } 00:08:41.286 ]' 00:08:41.286 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:41.286 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:41.286 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:41.286 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.286 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.286 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.286 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:41.286 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.286 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.286 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.286 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:08:41.286 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.286 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.286 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.286 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:41.286 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:41.548 14:02:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:41.548 00:08:41.548 real 0m0.302s 00:08:41.548 user 0m0.190s 00:08:41.548 sys 0m0.045s 00:08:41.548 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.548 14:02:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.548 ************************************ 00:08:41.548 END TEST rpc_integrity 00:08:41.548 ************************************ 00:08:41.548 14:02:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:41.548 14:02:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.548 14:02:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.548 14:02:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.548 ************************************ 00:08:41.548 START TEST rpc_plugins 00:08:41.548 ************************************ 00:08:41.548 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:41.548 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:41.548 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.548 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:41.548 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.548 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:41.548 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:41.548 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.549 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:41.549 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.549 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:41.549 { 00:08:41.549 "name": "Malloc1", 00:08:41.549 "aliases": [ 00:08:41.549 "66a1f39d-022c-49c6-8dd8-c0d7ef5276ff" 00:08:41.549 ], 00:08:41.549 "product_name": "Malloc disk", 00:08:41.549 "block_size": 4096, 00:08:41.549 "num_blocks": 256, 00:08:41.549 "uuid": "66a1f39d-022c-49c6-8dd8-c0d7ef5276ff", 00:08:41.549 "assigned_rate_limits": { 00:08:41.549 "rw_ios_per_sec": 0, 00:08:41.549 "rw_mbytes_per_sec": 0, 00:08:41.549 "r_mbytes_per_sec": 0, 00:08:41.549 "w_mbytes_per_sec": 0 00:08:41.549 }, 00:08:41.549 "claimed": false, 00:08:41.549 "zoned": false, 00:08:41.549 "supported_io_types": { 00:08:41.549 "read": true, 00:08:41.549 "write": true, 00:08:41.549 "unmap": true, 00:08:41.549 "flush": true, 00:08:41.549 "reset": true, 00:08:41.549 "nvme_admin": false, 00:08:41.549 "nvme_io": false, 00:08:41.549 "nvme_io_md": false, 00:08:41.549 "write_zeroes": true, 00:08:41.549 "zcopy": true, 00:08:41.549 "get_zone_info": false, 00:08:41.549 "zone_management": false, 00:08:41.549 "zone_append": false, 00:08:41.549 "compare": false, 00:08:41.549 "compare_and_write": false, 00:08:41.549 "abort": true, 00:08:41.549 "seek_hole": false, 00:08:41.549 "seek_data": false, 00:08:41.549 "copy": true, 00:08:41.549 "nvme_iov_md": false 
00:08:41.549 }, 00:08:41.549 "memory_domains": [ 00:08:41.549 { 00:08:41.549 "dma_device_id": "system", 00:08:41.549 "dma_device_type": 1 00:08:41.549 }, 00:08:41.549 { 00:08:41.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.549 "dma_device_type": 2 00:08:41.549 } 00:08:41.549 ], 00:08:41.549 "driver_specific": {} 00:08:41.549 } 00:08:41.549 ]' 00:08:41.549 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:41.549 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:41.549 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:41.549 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.549 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:41.549 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.549 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:41.549 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.549 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:41.549 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.549 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:41.549 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:41.549 14:02:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:41.549 00:08:41.549 real 0m0.154s 00:08:41.549 user 0m0.096s 00:08:41.549 sys 0m0.022s 00:08:41.549 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.549 14:02:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:41.549 ************************************ 00:08:41.549 END TEST rpc_plugins 00:08:41.549 ************************************ 00:08:41.811 14:02:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:41.811 14:02:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.811 14:02:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.811 14:02:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.811 ************************************ 00:08:41.811 START TEST rpc_trace_cmd_test 00:08:41.811 ************************************ 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:41.811 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2587430", 00:08:41.811 "tpoint_group_mask": "0x8", 00:08:41.811 "iscsi_conn": { 00:08:41.811 "mask": "0x2", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "scsi": { 00:08:41.811 "mask": "0x4", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "bdev": { 00:08:41.811 "mask": "0x8", 00:08:41.811 "tpoint_mask": "0xffffffffffffffff" 00:08:41.811 }, 00:08:41.811 "nvmf_rdma": { 00:08:41.811 "mask": "0x10", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "nvmf_tcp": { 00:08:41.811 "mask": "0x20", 00:08:41.811 
"tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "ftl": { 00:08:41.811 "mask": "0x40", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "blobfs": { 00:08:41.811 "mask": "0x80", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "dsa": { 00:08:41.811 "mask": "0x200", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "thread": { 00:08:41.811 "mask": "0x400", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "nvme_pcie": { 00:08:41.811 "mask": "0x800", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "iaa": { 00:08:41.811 "mask": "0x1000", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "nvme_tcp": { 00:08:41.811 "mask": "0x2000", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "bdev_nvme": { 00:08:41.811 "mask": "0x4000", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "sock": { 00:08:41.811 "mask": "0x8000", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "blob": { 00:08:41.811 "mask": "0x10000", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "bdev_raid": { 00:08:41.811 "mask": "0x20000", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 }, 00:08:41.811 "scheduler": { 00:08:41.811 "mask": "0x40000", 00:08:41.811 "tpoint_mask": "0x0" 00:08:41.811 } 00:08:41.811 }' 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:41.811 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:42.072 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:42.072 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:42.072 14:02:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:42.072 00:08:42.072 real 0m0.251s 00:08:42.072 user 0m0.208s 00:08:42.072 sys 0m0.034s 00:08:42.072 14:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.072 14:02:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.072 ************************************ 00:08:42.072 END TEST rpc_trace_cmd_test 00:08:42.072 ************************************ 00:08:42.072 14:02:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:42.072 14:02:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:42.072 14:02:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:42.072 14:02:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.072 14:02:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.072 14:02:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.072 ************************************ 00:08:42.072 START TEST rpc_daemon_integrity 00:08:42.072 ************************************ 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.072 14:02:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.072 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.073 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.073 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:42.073 { 00:08:42.073 "name": "Malloc2", 00:08:42.073 "aliases": [ 00:08:42.073 "b924f86f-c722-42a7-a9af-a480c2ee458f" 00:08:42.073 ], 00:08:42.073 "product_name": "Malloc disk", 00:08:42.073 "block_size": 512, 00:08:42.073 "num_blocks": 16384, 00:08:42.073 "uuid": "b924f86f-c722-42a7-a9af-a480c2ee458f", 00:08:42.073 "assigned_rate_limits": { 00:08:42.073 "rw_ios_per_sec": 0, 00:08:42.073 "rw_mbytes_per_sec": 0, 00:08:42.073 "r_mbytes_per_sec": 0, 00:08:42.073 "w_mbytes_per_sec": 0 00:08:42.073 }, 00:08:42.073 "claimed": false, 00:08:42.073 "zoned": false, 00:08:42.073 "supported_io_types": { 00:08:42.073 "read": true, 00:08:42.073 "write": true, 00:08:42.073 "unmap": true, 00:08:42.073 "flush": true, 00:08:42.073 "reset": true, 00:08:42.073 "nvme_admin": false, 00:08:42.073 "nvme_io": false, 00:08:42.073 "nvme_io_md": false, 00:08:42.073 "write_zeroes": true, 00:08:42.073 "zcopy": true, 00:08:42.073 "get_zone_info": false, 00:08:42.073 "zone_management": false, 00:08:42.073 "zone_append": false, 00:08:42.073 "compare": false, 00:08:42.073 "compare_and_write": false, 00:08:42.073 "abort": true, 00:08:42.073 "seek_hole": false, 00:08:42.073 "seek_data": false, 00:08:42.073 "copy": true, 00:08:42.073 "nvme_iov_md": false 00:08:42.073 }, 00:08:42.073 "memory_domains": [ 00:08:42.073 { 00:08:42.073 "dma_device_id": "system", 00:08:42.073 "dma_device_type": 1 00:08:42.073 }, 00:08:42.073 { 00:08:42.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.073 "dma_device_type": 2 00:08:42.073 } 00:08:42.073 ], 00:08:42.073 "driver_specific": {} 00:08:42.073 } 00:08:42.073 ]' 00:08:42.073 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:42.333 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:42.333 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:42.333 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.333 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.333 [2024-12-06 14:02:30.738085] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:42.333 
[2024-12-06 14:02:30.738130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.333 [2024-12-06 14:02:30.738147] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25dec70 00:08:42.333 [2024-12-06 14:02:30.738155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.333 [2024-12-06 14:02:30.739751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.333 [2024-12-06 14:02:30.739785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:42.333 Passthru0 00:08:42.333 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.333 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:42.333 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.333 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:42.334 { 00:08:42.334 "name": "Malloc2", 00:08:42.334 "aliases": [ 00:08:42.334 "b924f86f-c722-42a7-a9af-a480c2ee458f" 00:08:42.334 ], 00:08:42.334 "product_name": "Malloc disk", 00:08:42.334 "block_size": 512, 00:08:42.334 "num_blocks": 16384, 00:08:42.334 "uuid": "b924f86f-c722-42a7-a9af-a480c2ee458f", 00:08:42.334 "assigned_rate_limits": { 00:08:42.334 "rw_ios_per_sec": 0, 00:08:42.334 "rw_mbytes_per_sec": 0, 00:08:42.334 "r_mbytes_per_sec": 0, 00:08:42.334 "w_mbytes_per_sec": 0 00:08:42.334 }, 00:08:42.334 "claimed": true, 00:08:42.334 "claim_type": "exclusive_write", 00:08:42.334 "zoned": false, 00:08:42.334 "supported_io_types": { 00:08:42.334 "read": true, 00:08:42.334 "write": true, 00:08:42.334 "unmap": true, 00:08:42.334 "flush": true, 00:08:42.334 "reset": true, 00:08:42.334 "nvme_admin": false, 00:08:42.334 "nvme_io": false, 00:08:42.334 "nvme_io_md": false, 00:08:42.334 "write_zeroes": true, 00:08:42.334 "zcopy": true, 00:08:42.334 "get_zone_info": false, 00:08:42.334 "zone_management": false, 00:08:42.334 "zone_append": false, 00:08:42.334 "compare": false, 00:08:42.334 "compare_and_write": false, 00:08:42.334 "abort": true, 00:08:42.334 "seek_hole": false, 00:08:42.334 "seek_data": false, 00:08:42.334 "copy": true, 00:08:42.334 "nvme_iov_md": false 00:08:42.334 }, 00:08:42.334 "memory_domains": [ 00:08:42.334 { 00:08:42.334 "dma_device_id": "system", 00:08:42.334 "dma_device_type": 1 00:08:42.334 }, 00:08:42.334 { 00:08:42.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.334 "dma_device_type": 2 00:08:42.334 } 00:08:42.334 ], 00:08:42.334 "driver_specific": {} 00:08:42.334 }, 00:08:42.334 { 00:08:42.334 "name": "Passthru0", 00:08:42.334 "aliases": [ 00:08:42.334 "f639985b-290e-5933-abdb-1966826e5ebc" 00:08:42.334 ], 00:08:42.334 "product_name": "passthru", 00:08:42.334 "block_size": 512, 00:08:42.334 "num_blocks": 16384, 00:08:42.334 "uuid": "f639985b-290e-5933-abdb-1966826e5ebc", 00:08:42.334 "assigned_rate_limits": { 00:08:42.334 "rw_ios_per_sec": 0, 00:08:42.334 "rw_mbytes_per_sec": 0, 00:08:42.334 "r_mbytes_per_sec": 0, 00:08:42.334 "w_mbytes_per_sec": 0 00:08:42.334 }, 00:08:42.334 "claimed": false, 00:08:42.334 "zoned": false, 00:08:42.334 "supported_io_types": { 00:08:42.334 "read": true, 00:08:42.334 "write": true, 00:08:42.334 "unmap": true, 00:08:42.334 "flush": true, 00:08:42.334 "reset": true, 
00:08:42.334 "nvme_admin": false, 00:08:42.334 "nvme_io": false, 00:08:42.334 "nvme_io_md": false, 00:08:42.334 "write_zeroes": true, 00:08:42.334 "zcopy": true, 00:08:42.334 "get_zone_info": false, 00:08:42.334 "zone_management": false, 00:08:42.334 "zone_append": false, 00:08:42.334 "compare": false, 00:08:42.334 "compare_and_write": false, 00:08:42.334 "abort": true, 00:08:42.334 "seek_hole": false, 00:08:42.334 "seek_data": false, 00:08:42.334 "copy": true, 00:08:42.334 "nvme_iov_md": false 00:08:42.334 }, 00:08:42.334 "memory_domains": [ 00:08:42.334 { 00:08:42.334 "dma_device_id": "system", 00:08:42.334 "dma_device_type": 1 00:08:42.334 }, 00:08:42.334 { 00:08:42.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.334 "dma_device_type": 2 00:08:42.334 } 00:08:42.334 ], 00:08:42.334 "driver_specific": { 00:08:42.334 "passthru": { 00:08:42.334 "name": "Passthru0", 00:08:42.334 "base_bdev_name": "Malloc2" 00:08:42.334 } 00:08:42.334 } 00:08:42.334 } 00:08:42.334 ]' 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:42.334 00:08:42.334 real 0m0.311s 00:08:42.334 user 0m0.182s 00:08:42.334 sys 0m0.058s 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.334 14:02:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.334 ************************************ 00:08:42.334 END TEST rpc_daemon_integrity 00:08:42.334 ************************************ 00:08:42.334 14:02:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:42.334 14:02:30 rpc -- rpc/rpc.sh@84 -- # killprocess 2587430 00:08:42.334 14:02:30 rpc -- common/autotest_common.sh@954 -- # '[' -z 2587430 ']' 00:08:42.334 14:02:30 rpc -- common/autotest_common.sh@958 -- # kill -0 2587430 00:08:42.334 14:02:30 rpc -- common/autotest_common.sh@959 -- # uname 00:08:42.334 14:02:30 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.334 14:02:30 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2587430 
00:08:42.594 14:02:31 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.594 14:02:31 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.594 14:02:31 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2587430' 00:08:42.594 killing process with pid 2587430 00:08:42.594 14:02:31 rpc -- common/autotest_common.sh@973 -- # kill 2587430 00:08:42.594 14:02:31 rpc -- common/autotest_common.sh@978 -- # wait 2587430 00:08:42.855 00:08:42.855 real 0m2.735s 00:08:42.855 user 0m3.463s 00:08:42.855 sys 0m0.874s 00:08:42.855 14:02:31 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.855 14:02:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.855 ************************************ 00:08:42.855 END TEST rpc 00:08:42.855 ************************************ 00:08:42.855 14:02:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:42.855 14:02:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.855 14:02:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.855 14:02:31 -- common/autotest_common.sh@10 -- # set +x 00:08:42.855 ************************************ 00:08:42.855 START TEST skip_rpc 00:08:42.855 ************************************ 00:08:42.855 14:02:31 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:42.855 * Looking for test storage... 00:08:42.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:42.855 14:02:31 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.855 14:02:31 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:42.855 14:02:31 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:43.116 14:02:31 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.116 14:02:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:43.116 14:02:31 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.116 14:02:31 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:43.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.116 --rc genhtml_branch_coverage=1 00:08:43.116 --rc genhtml_function_coverage=1 00:08:43.116 --rc genhtml_legend=1 00:08:43.116 --rc geninfo_all_blocks=1 00:08:43.116 --rc geninfo_unexecuted_blocks=1 00:08:43.116 00:08:43.116 ' 00:08:43.116 14:02:31 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:43.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.116 --rc genhtml_branch_coverage=1 00:08:43.116 --rc genhtml_function_coverage=1 00:08:43.116 --rc genhtml_legend=1 00:08:43.116 --rc geninfo_all_blocks=1 00:08:43.116 --rc geninfo_unexecuted_blocks=1 00:08:43.116 00:08:43.116 ' 00:08:43.116 14:02:31 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:43.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.116 --rc genhtml_branch_coverage=1 00:08:43.116 --rc genhtml_function_coverage=1 00:08:43.116 --rc genhtml_legend=1 00:08:43.116 --rc geninfo_all_blocks=1 00:08:43.116 --rc geninfo_unexecuted_blocks=1 00:08:43.116 00:08:43.116 ' 00:08:43.116 14:02:31 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:43.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.116 --rc genhtml_branch_coverage=1 00:08:43.116 --rc genhtml_function_coverage=1 00:08:43.116 --rc genhtml_legend=1 00:08:43.116 --rc geninfo_all_blocks=1 00:08:43.116 --rc geninfo_unexecuted_blocks=1 00:08:43.116 00:08:43.116 ' 00:08:43.116 14:02:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:43.116 14:02:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:43.116 14:02:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:43.116 14:02:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.116 14:02:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.116 14:02:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.116 ************************************ 00:08:43.116 START TEST skip_rpc 00:08:43.116 ************************************ 00:08:43.116 14:02:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:43.116 
14:02:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2588103 00:08:43.116 14:02:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:43.116 14:02:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:43.116 14:02:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:43.116 [2024-12-06 14:02:31.639138] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:08:43.116 [2024-12-06 14:02:31.639199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588103 ] 00:08:43.116 [2024-12-06 14:02:31.728360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.377 [2024-12-06 14:02:31.782816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2588103 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2588103 ']' 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2588103 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2588103 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2588103' 00:08:48.724 killing process with pid 2588103 00:08:48.724 14:02:36 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2588103 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2588103 00:08:48.724 00:08:48.724 real 0m5.269s 00:08:48.724 user 0m5.008s 00:08:48.724 sys 0m0.297s 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.724 14:02:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.724 ************************************ 00:08:48.724 END TEST skip_rpc 00:08:48.724 ************************************ 00:08:48.724 14:02:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:48.724 14:02:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.724 14:02:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.724 14:02:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.724 ************************************ 00:08:48.724 START TEST skip_rpc_with_json 00:08:48.724 ************************************ 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2589155 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2589155 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2589155 ']' 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.725 14:02:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:48.725 [2024-12-06 14:02:36.982844] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:08:48.725 [2024-12-06 14:02:36.982896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2589155 ] 00:08:48.725 [2024-12-06 14:02:37.070165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.725 [2024-12-06 14:02:37.101112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.296 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.296 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:49.296 14:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:49.296 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.296 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:49.296 [2024-12-06 14:02:37.783731] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:49.296 request: 00:08:49.296 { 00:08:49.296 "trtype": "tcp", 00:08:49.296 "method": "nvmf_get_transports", 00:08:49.296 "req_id": 1 00:08:49.296 } 00:08:49.296 Got JSON-RPC error response 00:08:49.296 response: 00:08:49.296 { 00:08:49.296 "code": -19, 00:08:49.296 "message": "No such device" 00:08:49.296 } 00:08:49.296 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:49.296 14:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:49.297 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.297 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:49.297 [2024-12-06 14:02:37.795825] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.297 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.297 14:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:49.297 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.297 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:49.558 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.558 14:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:49.558 { 00:08:49.558 "subsystems": [ 00:08:49.558 { 00:08:49.558 "subsystem": "fsdev", 00:08:49.558 "config": [ 00:08:49.558 { 00:08:49.558 "method": "fsdev_set_opts", 00:08:49.558 "params": { 00:08:49.558 "fsdev_io_pool_size": 65535, 00:08:49.558 "fsdev_io_cache_size": 256 00:08:49.558 } 00:08:49.558 } 00:08:49.558 ] 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "subsystem": "vfio_user_target", 00:08:49.558 "config": null 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "subsystem": "keyring", 00:08:49.558 "config": [] 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "subsystem": "iobuf", 00:08:49.558 "config": [ 00:08:49.558 { 00:08:49.558 "method": "iobuf_set_options", 00:08:49.558 "params": { 00:08:49.558 "small_pool_count": 8192, 00:08:49.558 "large_pool_count": 1024, 00:08:49.558 "small_bufsize": 8192, 00:08:49.558 "large_bufsize": 135168, 00:08:49.558 "enable_numa": false 00:08:49.558 } 00:08:49.558 } 
00:08:49.558 ] 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "subsystem": "sock", 00:08:49.558 "config": [ 00:08:49.558 { 00:08:49.558 "method": "sock_set_default_impl", 00:08:49.558 "params": { 00:08:49.558 "impl_name": "posix" 00:08:49.558 } 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "method": "sock_impl_set_options", 00:08:49.558 "params": { 00:08:49.558 "impl_name": "ssl", 00:08:49.558 "recv_buf_size": 4096, 00:08:49.558 "send_buf_size": 4096, 00:08:49.558 "enable_recv_pipe": true, 00:08:49.558 "enable_quickack": false, 00:08:49.558 "enable_placement_id": 0, 00:08:49.558 "enable_zerocopy_send_server": true, 00:08:49.558 "enable_zerocopy_send_client": false, 00:08:49.558 "zerocopy_threshold": 0, 00:08:49.558 "tls_version": 0, 00:08:49.558 "enable_ktls": false 00:08:49.558 } 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "method": "sock_impl_set_options", 00:08:49.558 "params": { 00:08:49.558 "impl_name": "posix", 00:08:49.558 "recv_buf_size": 2097152, 00:08:49.558 "send_buf_size": 2097152, 00:08:49.558 "enable_recv_pipe": true, 00:08:49.558 "enable_quickack": false, 00:08:49.558 "enable_placement_id": 0, 00:08:49.558 "enable_zerocopy_send_server": true, 00:08:49.558 "enable_zerocopy_send_client": false, 00:08:49.558 "zerocopy_threshold": 0, 00:08:49.558 "tls_version": 0, 00:08:49.558 "enable_ktls": false 00:08:49.558 } 00:08:49.558 } 00:08:49.558 ] 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "subsystem": "vmd", 00:08:49.558 "config": [] 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "subsystem": "accel", 00:08:49.558 "config": [ 00:08:49.558 { 00:08:49.558 "method": "accel_set_options", 00:08:49.558 "params": { 00:08:49.558 "small_cache_size": 128, 00:08:49.558 "large_cache_size": 16, 00:08:49.558 "task_count": 2048, 00:08:49.558 "sequence_count": 2048, 00:08:49.558 "buf_count": 2048 00:08:49.558 } 00:08:49.558 } 00:08:49.558 ] 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "subsystem": "bdev", 00:08:49.558 "config": [ 00:08:49.558 { 00:08:49.558 "method": "bdev_set_options", 00:08:49.558 "params": { 00:08:49.558 "bdev_io_pool_size": 65535, 00:08:49.558 "bdev_io_cache_size": 256, 00:08:49.558 "bdev_auto_examine": true, 00:08:49.558 "iobuf_small_cache_size": 128, 00:08:49.558 "iobuf_large_cache_size": 16 00:08:49.558 } 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "method": "bdev_raid_set_options", 00:08:49.558 "params": { 00:08:49.558 "process_window_size_kb": 1024, 00:08:49.558 "process_max_bandwidth_mb_sec": 0 00:08:49.558 } 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "method": "bdev_iscsi_set_options", 00:08:49.558 "params": { 00:08:49.558 "timeout_sec": 30 00:08:49.558 } 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "method": "bdev_nvme_set_options", 00:08:49.558 "params": { 00:08:49.558 "action_on_timeout": "none", 00:08:49.558 "timeout_us": 0, 00:08:49.558 "timeout_admin_us": 0, 00:08:49.558 "keep_alive_timeout_ms": 10000, 00:08:49.558 "arbitration_burst": 0, 00:08:49.558 "low_priority_weight": 0, 00:08:49.558 "medium_priority_weight": 0, 00:08:49.558 "high_priority_weight": 0, 00:08:49.558 "nvme_adminq_poll_period_us": 10000, 00:08:49.558 "nvme_ioq_poll_period_us": 0, 00:08:49.558 "io_queue_requests": 0, 00:08:49.558 "delay_cmd_submit": true, 00:08:49.558 "transport_retry_count": 4, 00:08:49.558 "bdev_retry_count": 3, 00:08:49.558 "transport_ack_timeout": 0, 00:08:49.558 "ctrlr_loss_timeout_sec": 0, 00:08:49.558 "reconnect_delay_sec": 0, 00:08:49.558 "fast_io_fail_timeout_sec": 0, 00:08:49.558 "disable_auto_failback": false, 00:08:49.558 "generate_uuids": false, 00:08:49.558 "transport_tos": 
0, 00:08:49.558 "nvme_error_stat": false, 00:08:49.558 "rdma_srq_size": 0, 00:08:49.558 "io_path_stat": false, 00:08:49.558 "allow_accel_sequence": false, 00:08:49.558 "rdma_max_cq_size": 0, 00:08:49.558 "rdma_cm_event_timeout_ms": 0, 00:08:49.558 "dhchap_digests": [ 00:08:49.558 "sha256", 00:08:49.558 "sha384", 00:08:49.558 "sha512" 00:08:49.558 ], 00:08:49.558 "dhchap_dhgroups": [ 00:08:49.558 "null", 00:08:49.558 "ffdhe2048", 00:08:49.558 "ffdhe3072", 00:08:49.558 "ffdhe4096", 00:08:49.558 "ffdhe6144", 00:08:49.558 "ffdhe8192" 00:08:49.558 ] 00:08:49.558 } 00:08:49.558 }, 00:08:49.558 { 00:08:49.558 "method": "bdev_nvme_set_hotplug", 00:08:49.558 "params": { 00:08:49.558 "period_us": 100000, 00:08:49.558 "enable": false 00:08:49.558 } 00:08:49.558 }, 00:08:49.558 { 00:08:49.559 "method": "bdev_wait_for_examine" 00:08:49.559 } 00:08:49.559 ] 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "subsystem": "scsi", 00:08:49.559 "config": null 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "subsystem": "scheduler", 00:08:49.559 "config": [ 00:08:49.559 { 00:08:49.559 "method": "framework_set_scheduler", 00:08:49.559 "params": { 00:08:49.559 "name": "static" 00:08:49.559 } 00:08:49.559 } 00:08:49.559 ] 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "subsystem": "vhost_scsi", 00:08:49.559 "config": [] 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "subsystem": "vhost_blk", 00:08:49.559 "config": [] 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "subsystem": "ublk", 00:08:49.559 "config": [] 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "subsystem": "nbd", 00:08:49.559 "config": [] 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "subsystem": "nvmf", 00:08:49.559 "config": [ 00:08:49.559 { 00:08:49.559 "method": "nvmf_set_config", 00:08:49.559 "params": { 00:08:49.559 "discovery_filter": "match_any", 00:08:49.559 "admin_cmd_passthru": { 00:08:49.559 "identify_ctrlr": false 00:08:49.559 }, 00:08:49.559 "dhchap_digests": [ 00:08:49.559 "sha256", 00:08:49.559 "sha384", 00:08:49.559 "sha512" 00:08:49.559 ], 00:08:49.559 "dhchap_dhgroups": [ 00:08:49.559 "null", 00:08:49.559 "ffdhe2048", 00:08:49.559 "ffdhe3072", 00:08:49.559 "ffdhe4096", 00:08:49.559 "ffdhe6144", 00:08:49.559 "ffdhe8192" 00:08:49.559 ] 00:08:49.559 } 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "method": "nvmf_set_max_subsystems", 00:08:49.559 "params": { 00:08:49.559 "max_subsystems": 1024 00:08:49.559 } 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "method": "nvmf_set_crdt", 00:08:49.559 "params": { 00:08:49.559 "crdt1": 0, 00:08:49.559 "crdt2": 0, 00:08:49.559 "crdt3": 0 00:08:49.559 } 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "method": "nvmf_create_transport", 00:08:49.559 "params": { 00:08:49.559 "trtype": "TCP", 00:08:49.559 "max_queue_depth": 128, 00:08:49.559 "max_io_qpairs_per_ctrlr": 127, 00:08:49.559 "in_capsule_data_size": 4096, 00:08:49.559 "max_io_size": 131072, 00:08:49.559 "io_unit_size": 131072, 00:08:49.559 "max_aq_depth": 128, 00:08:49.559 "num_shared_buffers": 511, 00:08:49.559 "buf_cache_size": 4294967295, 00:08:49.559 "dif_insert_or_strip": false, 00:08:49.559 "zcopy": false, 00:08:49.559 "c2h_success": true, 00:08:49.559 "sock_priority": 0, 00:08:49.559 "abort_timeout_sec": 1, 00:08:49.559 "ack_timeout": 0, 00:08:49.559 "data_wr_pool_size": 0 00:08:49.559 } 00:08:49.559 } 00:08:49.559 ] 00:08:49.559 }, 00:08:49.559 { 00:08:49.559 "subsystem": "iscsi", 00:08:49.559 "config": [ 00:08:49.559 { 00:08:49.559 "method": "iscsi_set_options", 00:08:49.559 "params": { 00:08:49.559 "node_base": "iqn.2016-06.io.spdk", 00:08:49.559 "max_sessions": 
128, 00:08:49.559 "max_connections_per_session": 2, 00:08:49.559 "max_queue_depth": 64, 00:08:49.559 "default_time2wait": 2, 00:08:49.559 "default_time2retain": 20, 00:08:49.559 "first_burst_length": 8192, 00:08:49.559 "immediate_data": true, 00:08:49.559 "allow_duplicated_isid": false, 00:08:49.559 "error_recovery_level": 0, 00:08:49.559 "nop_timeout": 60, 00:08:49.559 "nop_in_interval": 30, 00:08:49.559 "disable_chap": false, 00:08:49.559 "require_chap": false, 00:08:49.559 "mutual_chap": false, 00:08:49.559 "chap_group": 0, 00:08:49.559 "max_large_datain_per_connection": 64, 00:08:49.559 "max_r2t_per_connection": 4, 00:08:49.559 "pdu_pool_size": 36864, 00:08:49.559 "immediate_data_pool_size": 16384, 00:08:49.559 "data_out_pool_size": 2048 00:08:49.559 } 00:08:49.559 } 00:08:49.559 ] 00:08:49.559 } 00:08:49.559 ] 00:08:49.559 } 00:08:49.559 14:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:49.559 14:02:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2589155 00:08:49.559 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2589155 ']' 00:08:49.559 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2589155 00:08:49.559 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:49.559 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.559 14:02:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2589155 00:08:49.559 14:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.559 14:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.559 14:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2589155' 00:08:49.559 killing process with pid 2589155 00:08:49.559 14:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2589155 00:08:49.559 14:02:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2589155 00:08:49.820 14:02:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2589487 00:08:49.820 14:02:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:49.820 14:02:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2589487 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2589487 ']' 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2589487 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2589487 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2589487' 00:08:55.110 killing process with pid 2589487 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2589487 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2589487 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:55.110 00:08:55.110 real 0m6.563s 00:08:55.110 user 0m6.469s 00:08:55.110 sys 0m0.570s 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:55.110 ************************************ 00:08:55.110 END TEST skip_rpc_with_json 00:08:55.110 ************************************ 00:08:55.110 14:02:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:55.110 14:02:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.110 14:02:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.110 14:02:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.110 ************************************ 00:08:55.110 START TEST skip_rpc_with_delay 00:08:55.110 ************************************ 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:55.110 
[2024-12-06 14:02:43.626919] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:55.110 00:08:55.110 real 0m0.080s 00:08:55.110 user 0m0.041s 00:08:55.110 sys 0m0.038s 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.110 14:02:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:55.110 ************************************ 00:08:55.110 END TEST skip_rpc_with_delay 00:08:55.110 ************************************ 00:08:55.110 14:02:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:55.110 14:02:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:55.110 14:02:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:55.110 14:02:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.110 14:02:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.110 14:02:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.110 ************************************ 00:08:55.110 START TEST exit_on_failed_rpc_init 00:08:55.110 ************************************ 00:08:55.110 14:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:55.110 14:02:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2590597 00:08:55.110 14:02:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2590597 00:08:55.110 14:02:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:55.110 14:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2590597 ']' 00:08:55.110 14:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.111 14:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.111 14:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.111 14:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.111 14:02:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:55.372 [2024-12-06 14:02:43.790277] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:08:55.372 [2024-12-06 14:02:43.790343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590597 ] 00:08:55.372 [2024-12-06 14:02:43.875899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.372 [2024-12-06 14:02:43.911319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:56.004 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:56.004 [2024-12-06 14:02:44.639362] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:08:56.004 [2024-12-06 14:02:44.639412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590881 ] 00:08:56.264 [2024-12-06 14:02:44.726937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.264 [2024-12-06 14:02:44.762625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.264 [2024-12-06 14:02:44.762675] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:56.264 [2024-12-06 14:02:44.762685] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:56.264 [2024-12-06 14:02:44.762691] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2590597 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2590597 ']' 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2590597 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2590597 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2590597' 00:08:56.264 killing process with pid 2590597 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2590597 00:08:56.264 14:02:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2590597 00:08:56.524 00:08:56.524 real 0m1.323s 00:08:56.524 user 0m1.512s 00:08:56.524 sys 0m0.416s 00:08:56.524 14:02:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.524 14:02:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:56.524 ************************************ 00:08:56.524 END TEST exit_on_failed_rpc_init 00:08:56.524 ************************************ 00:08:56.524 14:02:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:56.524 00:08:56.524 real 0m13.760s 00:08:56.524 user 0m13.256s 00:08:56.524 sys 0m1.653s 00:08:56.524 14:02:45 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.524 14:02:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.524 ************************************ 00:08:56.524 END TEST skip_rpc 00:08:56.524 ************************************ 00:08:56.524 14:02:45 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:56.524 14:02:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.524 14:02:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.524 14:02:45 -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.783 ************************************ 00:08:56.783 START TEST rpc_client 00:08:56.783 ************************************ 00:08:56.783 14:02:45 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:56.783 * Looking for test storage... 00:08:56.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:56.783 14:02:45 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:56.783 14:02:45 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:08:56.783 14:02:45 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:56.783 14:02:45 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.783 14:02:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:56.783 14:02:45 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.783 14:02:45 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:56.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.783 --rc genhtml_branch_coverage=1 00:08:56.783 --rc genhtml_function_coverage=1 00:08:56.783 --rc genhtml_legend=1 00:08:56.783 --rc geninfo_all_blocks=1 00:08:56.783 --rc geninfo_unexecuted_blocks=1 00:08:56.783 00:08:56.783 ' 00:08:56.784 14:02:45 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:56.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.784 --rc genhtml_branch_coverage=1 00:08:56.784 --rc genhtml_function_coverage=1 00:08:56.784 --rc genhtml_legend=1 00:08:56.784 --rc geninfo_all_blocks=1 00:08:56.784 --rc geninfo_unexecuted_blocks=1 00:08:56.784 00:08:56.784 ' 00:08:56.784 14:02:45 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:56.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.784 --rc genhtml_branch_coverage=1 00:08:56.784 --rc genhtml_function_coverage=1 00:08:56.784 --rc genhtml_legend=1 00:08:56.784 --rc geninfo_all_blocks=1 00:08:56.784 --rc geninfo_unexecuted_blocks=1 00:08:56.784 00:08:56.784 ' 00:08:56.784 14:02:45 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:56.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.784 --rc genhtml_branch_coverage=1 00:08:56.784 --rc genhtml_function_coverage=1 00:08:56.784 --rc genhtml_legend=1 00:08:56.784 --rc geninfo_all_blocks=1 00:08:56.784 --rc geninfo_unexecuted_blocks=1 00:08:56.784 00:08:56.784 ' 00:08:56.784 14:02:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:56.784 OK 00:08:56.784 14:02:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:56.784 00:08:56.784 real 0m0.220s 00:08:56.784 user 0m0.135s 00:08:56.784 sys 0m0.097s 00:08:56.784 14:02:45 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.784 14:02:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:56.784 ************************************ 00:08:56.784 END TEST rpc_client 00:08:56.784 ************************************ 00:08:57.043 14:02:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:08:57.043 14:02:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.043 14:02:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.043 14:02:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.043 ************************************ 00:08:57.043 START TEST json_config 00:08:57.043 ************************************ 00:08:57.043 14:02:45 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:57.043 14:02:45 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:57.043 14:02:45 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:08:57.043 14:02:45 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:57.043 14:02:45 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:57.043 14:02:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.043 14:02:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.043 14:02:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.043 14:02:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.043 14:02:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.043 14:02:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.043 14:02:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.043 14:02:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.043 14:02:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.043 14:02:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.043 14:02:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.043 14:02:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:57.043 14:02:45 json_config -- scripts/common.sh@345 -- # : 1 00:08:57.043 14:02:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.043 14:02:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.043 14:02:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:57.043 14:02:45 json_config -- scripts/common.sh@353 -- # local d=1 00:08:57.043 14:02:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.043 14:02:45 json_config -- scripts/common.sh@355 -- # echo 1 00:08:57.043 14:02:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.043 14:02:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:57.043 14:02:45 json_config -- scripts/common.sh@353 -- # local d=2 00:08:57.043 14:02:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.043 14:02:45 json_config -- scripts/common.sh@355 -- # echo 2 00:08:57.043 14:02:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.043 14:02:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.043 14:02:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.043 14:02:45 json_config -- scripts/common.sh@368 -- # return 0 00:08:57.043 14:02:45 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.043 14:02:45 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:57.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.043 --rc genhtml_branch_coverage=1 00:08:57.043 --rc genhtml_function_coverage=1 00:08:57.043 --rc genhtml_legend=1 00:08:57.043 --rc geninfo_all_blocks=1 00:08:57.043 --rc geninfo_unexecuted_blocks=1 00:08:57.043 00:08:57.043 ' 00:08:57.043 14:02:45 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:57.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.043 --rc genhtml_branch_coverage=1 00:08:57.043 --rc genhtml_function_coverage=1 00:08:57.043 --rc genhtml_legend=1 00:08:57.043 --rc geninfo_all_blocks=1 00:08:57.043 --rc geninfo_unexecuted_blocks=1 00:08:57.043 00:08:57.043 ' 00:08:57.043 14:02:45 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:57.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.043 --rc genhtml_branch_coverage=1 00:08:57.043 --rc genhtml_function_coverage=1 00:08:57.043 --rc genhtml_legend=1 00:08:57.043 --rc geninfo_all_blocks=1 00:08:57.043 --rc geninfo_unexecuted_blocks=1 00:08:57.043 00:08:57.043 ' 00:08:57.043 14:02:45 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:57.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.043 --rc genhtml_branch_coverage=1 00:08:57.043 --rc genhtml_function_coverage=1 00:08:57.043 --rc genhtml_legend=1 00:08:57.043 --rc geninfo_all_blocks=1 00:08:57.043 --rc geninfo_unexecuted_blocks=1 00:08:57.043 00:08:57.043 ' 00:08:57.043 14:02:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:08:57.043 14:02:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.043 14:02:45 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.043 14:02:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.043 14:02:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.043 14:02:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.043 14:02:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.043 14:02:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.043 14:02:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.043 14:02:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.043 14:02:45 json_config -- paths/export.sh@5 -- # export PATH 00:08:57.044 14:02:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.044 14:02:45 json_config -- nvmf/common.sh@51 -- # : 0 00:08:57.044 14:02:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.044 14:02:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:08:57.303 14:02:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.303 14:02:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.303 14:02:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.303 14:02:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.303 14:02:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.303 14:02:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.303 14:02:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:57.303 INFO: JSON configuration test init 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:57.303 14:02:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.303 14:02:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:57.303 14:02:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.303 14:02:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:57.303 14:02:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:57.303 14:02:45 json_config -- 
json_config/common.sh@9 -- # local app=target 00:08:57.303 14:02:45 json_config -- json_config/common.sh@10 -- # shift 00:08:57.303 14:02:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:57.304 14:02:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:57.304 14:02:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:57.304 14:02:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:57.304 14:02:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:57.304 14:02:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2591189 00:08:57.304 14:02:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:57.304 Waiting for target to run... 00:08:57.304 14:02:45 json_config -- json_config/common.sh@25 -- # waitforlisten 2591189 /var/tmp/spdk_tgt.sock 00:08:57.304 14:02:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 2591189 ']' 00:08:57.304 14:02:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:57.304 14:02:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.304 14:02:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:57.304 14:02:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:57.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:57.304 14:02:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.304 14:02:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:57.304 [2024-12-06 14:02:45.770065] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:08:57.304 [2024-12-06 14:02:45.770140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591189 ] 00:08:57.563 [2024-12-06 14:02:46.153268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.563 [2024-12-06 14:02:46.178302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.131 14:02:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.131 14:02:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:08:58.131 14:02:46 json_config -- json_config/common.sh@26 -- # echo '' 00:08:58.131 00:08:58.131 14:02:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:58.131 14:02:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:58.131 14:02:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.131 14:02:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:58.131 14:02:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:58.131 14:02:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:58.131 14:02:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:58.131 14:02:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:58.131 14:02:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:58.131 14:02:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:58.131 14:02:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:58.701 14:02:47 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:58.701 14:02:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:58.701 14:02:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.701 14:02:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:58.701 14:02:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:58.701 14:02:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:58.701 14:02:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:58.701 14:02:47 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:58.701 14:02:47 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:58.701 14:02:47 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:58.701 14:02:47 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:58.701 14:02:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:58.961 14:02:47 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@54 -- # sort 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:58.961 14:02:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:58.961 14:02:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:58.961 14:02:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.961 14:02:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:58.961 14:02:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:58.961 MallocForNvmf0 00:08:58.961 14:02:47 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:58.961 14:02:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:59.221 MallocForNvmf1 00:08:59.221 14:02:47 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:59.221 14:02:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:59.481 [2024-12-06 14:02:47.903090] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.481 14:02:47 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.481 14:02:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.481 14:02:48 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:59.481 14:02:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:59.741 14:02:48 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:59.741 14:02:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:00.001 14:02:48 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:00.001 14:02:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:00.001 [2024-12-06 14:02:48.577152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:00.001 14:02:48 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:00.001 14:02:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.001 14:02:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:00.001 14:02:48 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:00.001 14:02:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.001 14:02:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:00.261 14:02:48 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:09:00.261 14:02:48 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:00.261 14:02:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:00.261 MallocBdevForConfigChangeCheck 00:09:00.261 14:02:48 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:00.261 14:02:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.261 14:02:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:00.261 14:02:48 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:00.261 14:02:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:00.833 14:02:49 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:09:00.833 INFO: shutting down applications... 
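For readers skimming the trace, the target configuration assembled above corresponds to a short rpc.py sequence. This is a condensed sketch, not the verbatim test script: the long workspace prefix is abbreviated to $SPDK, every call goes through the -s /var/tmp/spdk_tgt.sock socket shown in the log, and the final redirect of save_config into spdk_tgt_config.json is illustrative.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB malloc bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc bdev, 1024 B blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, flags as captured above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC save_config > "$SPDK/spdk_tgt_config.json"         # snapshot reused by the later diff checks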
00:09:00.833 14:02:49 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:00.833 14:02:49 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:00.833 14:02:49 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:00.833 14:02:49 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:01.092 Calling clear_iscsi_subsystem 00:09:01.092 Calling clear_nvmf_subsystem 00:09:01.092 Calling clear_nbd_subsystem 00:09:01.092 Calling clear_ublk_subsystem 00:09:01.092 Calling clear_vhost_blk_subsystem 00:09:01.092 Calling clear_vhost_scsi_subsystem 00:09:01.092 Calling clear_bdev_subsystem 00:09:01.092 14:02:49 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:09:01.092 14:02:49 json_config -- json_config/json_config.sh@350 -- # count=100 00:09:01.092 14:02:49 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:09:01.092 14:02:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:01.092 14:02:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:01.092 14:02:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:09:01.352 14:02:49 json_config -- json_config/json_config.sh@352 -- # break 00:09:01.352 14:02:49 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:09:01.352 14:02:49 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:09:01.352 14:02:49 json_config -- json_config/common.sh@31 -- # local app=target 00:09:01.352 14:02:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:01.352 14:02:49 json_config -- json_config/common.sh@35 -- # [[ -n 2591189 ]] 00:09:01.352 14:02:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2591189 00:09:01.352 14:02:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:01.352 14:02:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:01.352 14:02:49 json_config -- json_config/common.sh@41 -- # kill -0 2591189 00:09:01.352 14:02:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:01.923 14:02:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:01.923 14:02:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:01.923 14:02:50 json_config -- json_config/common.sh@41 -- # kill -0 2591189 00:09:01.923 14:02:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:01.923 14:02:50 json_config -- json_config/common.sh@43 -- # break 00:09:01.923 14:02:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:01.923 14:02:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:01.923 SPDK target shutdown done 00:09:01.923 14:02:50 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:09:01.923 INFO: relaunching applications... 
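The shutdown traced here (json_config/common.sh) is just a SIGINT followed by polling until the PID is gone. A simplified sketch, using the PID from this run purely for illustration:

app_pid=2591189                         # spdk_tgt PID from this run
kill -SIGINT "$app_pid"                 # ask the target to exit cleanly
for (( i = 0; i < 30; i++ )); do        # poll for up to ~15 seconds
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done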
00:09:01.923 14:02:50 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:01.923 14:02:50 json_config -- json_config/common.sh@9 -- # local app=target 00:09:01.923 14:02:50 json_config -- json_config/common.sh@10 -- # shift 00:09:01.923 14:02:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:01.923 14:02:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:01.923 14:02:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:01.923 14:02:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:01.923 14:02:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:01.923 14:02:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2592184 00:09:01.923 14:02:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:01.923 Waiting for target to run... 00:09:01.923 14:02:50 json_config -- json_config/common.sh@25 -- # waitforlisten 2592184 /var/tmp/spdk_tgt.sock 00:09:01.923 14:02:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:01.923 14:02:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 2592184 ']' 00:09:01.923 14:02:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:01.923 14:02:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.923 14:02:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:01.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:01.923 14:02:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.923 14:02:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:02.184 [2024-12-06 14:02:50.563168] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:02.184 [2024-12-06 14:02:50.563248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592184 ] 00:09:02.444 [2024-12-06 14:02:50.863203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.444 [2024-12-06 14:02:50.888448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.014 [2024-12-06 14:02:51.388832] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.014 [2024-12-06 14:02:51.421179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:03.014 14:02:51 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.014 14:02:51 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:03.014 14:02:51 json_config -- json_config/common.sh@26 -- # echo '' 00:09:03.014 00:09:03.014 14:02:51 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:09:03.014 14:02:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:03.014 INFO: Checking if target configuration is the same... 
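The "is the configuration the same" check that follows (json_diff.sh) sorts both JSON documents into a canonical form with config_filter.py and diffs them. Condensed sketch; the temp file names are illustrative and $SPDK / $RPC are the abbreviations used in the earlier sketch:

$RPC save_config | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live.json
"$SPDK/test/json_config/config_filter.py" -method sort \
    < "$SPDK/spdk_tgt_config.json" > /tmp/saved.json

if diff -u /tmp/live.json /tmp/saved.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi

Deleting MallocBdevForConfigChangeCheck before the second comparison is what makes the diff non-empty further down, so the second run takes the "configuration change detected" branch.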
00:09:03.014 14:02:51 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.014 14:02:51 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:09:03.014 14:02:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:03.014 + '[' 2 -ne 2 ']' 00:09:03.014 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:03.014 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:09:03.014 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.014 +++ basename /dev/fd/62 00:09:03.014 ++ mktemp /tmp/62.XXX 00:09:03.014 + tmp_file_1=/tmp/62.Ufn 00:09:03.014 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.014 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:03.014 + tmp_file_2=/tmp/spdk_tgt_config.json.kQO 00:09:03.014 + ret=0 00:09:03.014 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:03.275 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:03.275 + diff -u /tmp/62.Ufn /tmp/spdk_tgt_config.json.kQO 00:09:03.275 + echo 'INFO: JSON config files are the same' 00:09:03.275 INFO: JSON config files are the same 00:09:03.275 + rm /tmp/62.Ufn /tmp/spdk_tgt_config.json.kQO 00:09:03.275 + exit 0 00:09:03.275 14:02:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:09:03.275 14:02:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:03.275 INFO: changing configuration and checking if this can be detected... 00:09:03.275 14:02:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:03.275 14:02:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:03.535 14:02:52 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.535 14:02:52 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:09:03.535 14:02:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:03.535 + '[' 2 -ne 2 ']' 00:09:03.535 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:03.535 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:09:03.535 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.535 +++ basename /dev/fd/62 00:09:03.535 ++ mktemp /tmp/62.XXX 00:09:03.535 + tmp_file_1=/tmp/62.fbC 00:09:03.535 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:03.535 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:03.535 + tmp_file_2=/tmp/spdk_tgt_config.json.kvI 00:09:03.535 + ret=0 00:09:03.535 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:03.796 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:03.796 + diff -u /tmp/62.fbC /tmp/spdk_tgt_config.json.kvI 00:09:03.796 + ret=1 00:09:03.796 + echo '=== Start of file: /tmp/62.fbC ===' 00:09:03.796 + cat /tmp/62.fbC 00:09:03.796 + echo '=== End of file: /tmp/62.fbC ===' 00:09:03.796 + echo '' 00:09:03.796 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kvI ===' 00:09:03.796 + cat /tmp/spdk_tgt_config.json.kvI 00:09:03.796 + echo '=== End of file: /tmp/spdk_tgt_config.json.kvI ===' 00:09:03.796 + echo '' 00:09:03.796 + rm /tmp/62.fbC /tmp/spdk_tgt_config.json.kvI 00:09:03.796 + exit 1 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:09:03.796 INFO: configuration change detected. 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:09:03.796 14:02:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.796 14:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 2592184 ]] 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:09:03.796 14:02:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.796 14:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:09:03.796 14:02:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:09:03.796 14:02:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.796 14:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:04.057 14:02:52 json_config -- json_config/json_config.sh@330 -- # killprocess 2592184 00:09:04.057 14:02:52 json_config -- common/autotest_common.sh@954 -- # '[' -z 2592184 ']' 00:09:04.057 14:02:52 json_config -- common/autotest_common.sh@958 -- # kill -0 2592184 00:09:04.057 14:02:52 json_config -- common/autotest_common.sh@959 -- # uname 00:09:04.057 14:02:52 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.057 14:02:52 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2592184 00:09:04.057 14:02:52 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.057 14:02:52 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.057 14:02:52 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2592184' 00:09:04.057 killing process with pid 2592184 00:09:04.057 14:02:52 json_config -- common/autotest_common.sh@973 -- # kill 2592184 00:09:04.057 14:02:52 json_config -- common/autotest_common.sh@978 -- # wait 2592184 00:09:04.317 14:02:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:04.317 14:02:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:09:04.317 14:02:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:04.317 14:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:04.317 14:02:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:09:04.317 14:02:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:09:04.317 INFO: Success 00:09:04.317 00:09:04.317 real 0m7.360s 00:09:04.317 user 0m8.738s 00:09:04.317 sys 0m2.086s 00:09:04.317 14:02:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.317 14:02:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:04.317 ************************************ 00:09:04.317 END TEST json_config 00:09:04.317 ************************************ 00:09:04.317 14:02:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:04.317 14:02:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.317 14:02:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.317 14:02:52 -- common/autotest_common.sh@10 -- # set +x 00:09:04.317 ************************************ 00:09:04.317 START TEST json_config_extra_key 00:09:04.317 ************************************ 00:09:04.317 14:02:52 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:04.597 14:02:52 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:04.597 14:02:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:09:04.597 14:02:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:04.597 14:02:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.597 14:02:53 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:04.597 14:02:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:04.598 14:02:53 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.598 14:02:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:04.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.598 --rc genhtml_branch_coverage=1 00:09:04.598 --rc genhtml_function_coverage=1 00:09:04.598 --rc genhtml_legend=1 00:09:04.598 --rc geninfo_all_blocks=1 00:09:04.598 --rc geninfo_unexecuted_blocks=1 00:09:04.598 00:09:04.598 ' 00:09:04.598 14:02:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:04.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.598 --rc genhtml_branch_coverage=1 00:09:04.598 --rc genhtml_function_coverage=1 00:09:04.598 --rc genhtml_legend=1 00:09:04.598 --rc geninfo_all_blocks=1 00:09:04.598 --rc geninfo_unexecuted_blocks=1 00:09:04.598 00:09:04.598 ' 00:09:04.598 14:02:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:04.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.598 --rc genhtml_branch_coverage=1 00:09:04.598 --rc genhtml_function_coverage=1 00:09:04.598 --rc genhtml_legend=1 00:09:04.598 --rc geninfo_all_blocks=1 00:09:04.598 --rc geninfo_unexecuted_blocks=1 00:09:04.598 00:09:04.598 ' 00:09:04.598 14:02:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:04.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.598 --rc genhtml_branch_coverage=1 00:09:04.598 --rc genhtml_function_coverage=1 00:09:04.598 --rc genhtml_legend=1 00:09:04.598 --rc geninfo_all_blocks=1 00:09:04.598 --rc geninfo_unexecuted_blocks=1 00:09:04.598 00:09:04.598 ' 00:09:04.598 14:02:53 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.598 14:02:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.598 14:02:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.598 14:02:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.598 14:02:53 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.598 14:02:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:04.598 14:02:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.598 14:02:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:04.598 INFO: launching applications... 
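Starting the target for this test (json_config_test_start_app in json_config/common.sh) reduces to launching spdk_tgt with the extra-key JSON and waiting for its RPC socket. Rough sketch; waitforlisten is the autotest_common.sh helper visible in the trace, and the PID value is illustrative:

"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK/test/json_config/extra_key.json" &
app_pid=$!                              # 2592940 in this run

# Block until the new process is up and listening on the RPC socket
waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock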
00:09:04.598 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:04.598 14:02:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:04.598 14:02:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:04.598 14:02:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:04.599 14:02:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:04.599 14:02:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:04.599 14:02:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:04.599 14:02:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:04.599 14:02:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2592940 00:09:04.599 14:02:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:04.599 Waiting for target to run... 00:09:04.599 14:02:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2592940 /var/tmp/spdk_tgt.sock 00:09:04.599 14:02:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2592940 ']' 00:09:04.599 14:02:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:04.599 14:02:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:04.599 14:02:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.599 14:02:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:04.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:04.599 14:02:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.599 14:02:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:04.599 [2024-12-06 14:02:53.185278] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:04.599 [2024-12-06 14:02:53.185354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592940 ] 00:09:04.859 [2024-12-06 14:02:53.488433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.119 [2024-12-06 14:02:53.517009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.379 14:02:53 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.379 14:02:53 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:05.379 14:02:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:05.379 00:09:05.379 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:05.379 INFO: shutting down applications... 
00:09:05.379 14:02:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:05.379 14:02:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:05.379 14:02:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:05.379 14:02:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2592940 ]] 00:09:05.379 14:02:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2592940 00:09:05.379 14:02:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:05.379 14:02:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:05.379 14:02:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2592940 00:09:05.379 14:02:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:05.949 14:02:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:05.949 14:02:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:05.949 14:02:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2592940 00:09:05.949 14:02:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:05.949 14:02:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:05.949 14:02:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:05.949 14:02:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:05.949 SPDK target shutdown done 00:09:05.949 14:02:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:05.949 Success 00:09:05.949 00:09:05.949 real 0m1.565s 00:09:05.949 user 0m1.142s 00:09:05.949 sys 0m0.445s 00:09:05.949 14:02:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.949 14:02:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:05.949 ************************************ 00:09:05.949 END TEST json_config_extra_key 00:09:05.949 ************************************ 00:09:05.949 14:02:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:05.949 14:02:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.949 14:02:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.949 14:02:54 -- common/autotest_common.sh@10 -- # set +x 00:09:05.950 ************************************ 00:09:05.950 START TEST alias_rpc 00:09:05.950 ************************************ 00:09:05.950 14:02:54 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:06.211 * Looking for test storage... 
00:09:06.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.211 14:02:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:06.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.211 --rc genhtml_branch_coverage=1 00:09:06.211 --rc genhtml_function_coverage=1 00:09:06.211 --rc genhtml_legend=1 00:09:06.211 --rc geninfo_all_blocks=1 00:09:06.211 --rc geninfo_unexecuted_blocks=1 00:09:06.211 00:09:06.211 ' 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:06.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.211 --rc genhtml_branch_coverage=1 00:09:06.211 --rc genhtml_function_coverage=1 00:09:06.211 --rc genhtml_legend=1 00:09:06.211 --rc geninfo_all_blocks=1 00:09:06.211 --rc geninfo_unexecuted_blocks=1 00:09:06.211 00:09:06.211 ' 00:09:06.211 14:02:54 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:06.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.211 --rc genhtml_branch_coverage=1 00:09:06.211 --rc genhtml_function_coverage=1 00:09:06.211 --rc genhtml_legend=1 00:09:06.211 --rc geninfo_all_blocks=1 00:09:06.211 --rc geninfo_unexecuted_blocks=1 00:09:06.211 00:09:06.211 ' 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:06.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.211 --rc genhtml_branch_coverage=1 00:09:06.211 --rc genhtml_function_coverage=1 00:09:06.211 --rc genhtml_legend=1 00:09:06.211 --rc geninfo_all_blocks=1 00:09:06.211 --rc geninfo_unexecuted_blocks=1 00:09:06.211 00:09:06.211 ' 00:09:06.211 14:02:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:06.211 14:02:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2593341 00:09:06.211 14:02:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2593341 00:09:06.211 14:02:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2593341 ']' 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.211 14:02:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.211 [2024-12-06 14:02:54.832130] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:06.211 [2024-12-06 14:02:54.832199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593341 ] 00:09:06.472 [2024-12-06 14:02:54.920533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.472 [2024-12-06 14:02:54.955269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.041 14:02:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.041 14:02:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:07.041 14:02:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:07.301 14:02:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2593341 00:09:07.301 14:02:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2593341 ']' 00:09:07.301 14:02:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2593341 00:09:07.301 14:02:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:07.301 14:02:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.301 14:02:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593341 00:09:07.301 14:02:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.301 14:02:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.301 14:02:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593341' 00:09:07.301 killing process with pid 2593341 00:09:07.301 14:02:55 alias_rpc -- common/autotest_common.sh@973 -- # kill 2593341 00:09:07.301 14:02:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 2593341 00:09:07.562 00:09:07.562 real 0m1.482s 00:09:07.562 user 0m1.617s 00:09:07.562 sys 0m0.414s 00:09:07.562 14:02:56 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.562 14:02:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.562 ************************************ 00:09:07.562 END TEST alias_rpc 00:09:07.562 ************************************ 00:09:07.562 14:02:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:07.562 14:02:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:07.562 14:02:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.562 14:02:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.562 14:02:56 -- common/autotest_common.sh@10 -- # set +x 00:09:07.562 ************************************ 00:09:07.563 START TEST spdkcli_tcp 00:09:07.563 ************************************ 00:09:07.563 14:02:56 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:07.822 * Looking for test storage... 
00:09:07.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:07.822 14:02:56 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:07.822 14:02:56 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:07.822 14:02:56 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:07.822 14:02:56 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.822 14:02:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:07.822 14:02:56 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.822 14:02:56 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:07.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.822 --rc genhtml_branch_coverage=1 00:09:07.822 --rc genhtml_function_coverage=1 00:09:07.822 --rc genhtml_legend=1 00:09:07.822 --rc geninfo_all_blocks=1 00:09:07.822 --rc geninfo_unexecuted_blocks=1 00:09:07.822 00:09:07.822 ' 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:07.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.823 --rc genhtml_branch_coverage=1 00:09:07.823 --rc genhtml_function_coverage=1 00:09:07.823 --rc genhtml_legend=1 00:09:07.823 --rc geninfo_all_blocks=1 00:09:07.823 --rc 
geninfo_unexecuted_blocks=1 00:09:07.823 00:09:07.823 ' 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:07.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.823 --rc genhtml_branch_coverage=1 00:09:07.823 --rc genhtml_function_coverage=1 00:09:07.823 --rc genhtml_legend=1 00:09:07.823 --rc geninfo_all_blocks=1 00:09:07.823 --rc geninfo_unexecuted_blocks=1 00:09:07.823 00:09:07.823 ' 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:07.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.823 --rc genhtml_branch_coverage=1 00:09:07.823 --rc genhtml_function_coverage=1 00:09:07.823 --rc genhtml_legend=1 00:09:07.823 --rc geninfo_all_blocks=1 00:09:07.823 --rc geninfo_unexecuted_blocks=1 00:09:07.823 00:09:07.823 ' 00:09:07.823 14:02:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:07.823 14:02:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:07.823 14:02:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:07.823 14:02:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:07.823 14:02:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:07.823 14:02:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:07.823 14:02:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:07.823 14:02:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2593740 00:09:07.823 14:02:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2593740 00:09:07.823 14:02:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2593740 ']' 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.823 14:02:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:07.823 [2024-12-06 14:02:56.385092] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:07.823 [2024-12-06 14:02:56.385143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593740 ] 00:09:08.082 [2024-12-06 14:02:56.470125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.082 [2024-12-06 14:02:56.503249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.082 [2024-12-06 14:02:56.503249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.652 14:02:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.652 14:02:57 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:08.652 14:02:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:08.652 14:02:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2593764 00:09:08.652 14:02:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:08.914 [ 00:09:08.914 "bdev_malloc_delete", 00:09:08.914 "bdev_malloc_create", 00:09:08.914 "bdev_null_resize", 00:09:08.914 "bdev_null_delete", 00:09:08.914 "bdev_null_create", 00:09:08.914 "bdev_nvme_cuse_unregister", 00:09:08.914 "bdev_nvme_cuse_register", 00:09:08.914 "bdev_opal_new_user", 00:09:08.914 "bdev_opal_set_lock_state", 00:09:08.914 "bdev_opal_delete", 00:09:08.914 "bdev_opal_get_info", 00:09:08.914 "bdev_opal_create", 00:09:08.914 "bdev_nvme_opal_revert", 00:09:08.914 "bdev_nvme_opal_init", 00:09:08.914 "bdev_nvme_send_cmd", 00:09:08.914 "bdev_nvme_set_keys", 00:09:08.914 "bdev_nvme_get_path_iostat", 00:09:08.914 "bdev_nvme_get_mdns_discovery_info", 00:09:08.914 "bdev_nvme_stop_mdns_discovery", 00:09:08.914 "bdev_nvme_start_mdns_discovery", 00:09:08.914 "bdev_nvme_set_multipath_policy", 00:09:08.914 "bdev_nvme_set_preferred_path", 00:09:08.914 "bdev_nvme_get_io_paths", 00:09:08.914 "bdev_nvme_remove_error_injection", 00:09:08.914 "bdev_nvme_add_error_injection", 00:09:08.914 "bdev_nvme_get_discovery_info", 00:09:08.914 "bdev_nvme_stop_discovery", 00:09:08.914 "bdev_nvme_start_discovery", 00:09:08.914 "bdev_nvme_get_controller_health_info", 00:09:08.914 "bdev_nvme_disable_controller", 00:09:08.914 "bdev_nvme_enable_controller", 00:09:08.914 "bdev_nvme_reset_controller", 00:09:08.914 "bdev_nvme_get_transport_statistics", 00:09:08.914 "bdev_nvme_apply_firmware", 00:09:08.914 "bdev_nvme_detach_controller", 00:09:08.914 "bdev_nvme_get_controllers", 00:09:08.914 "bdev_nvme_attach_controller", 00:09:08.914 "bdev_nvme_set_hotplug", 00:09:08.914 "bdev_nvme_set_options", 00:09:08.914 "bdev_passthru_delete", 00:09:08.914 "bdev_passthru_create", 00:09:08.914 "bdev_lvol_set_parent_bdev", 00:09:08.914 "bdev_lvol_set_parent", 00:09:08.914 "bdev_lvol_check_shallow_copy", 00:09:08.914 "bdev_lvol_start_shallow_copy", 00:09:08.914 "bdev_lvol_grow_lvstore", 00:09:08.914 "bdev_lvol_get_lvols", 00:09:08.915 "bdev_lvol_get_lvstores", 00:09:08.915 "bdev_lvol_delete", 00:09:08.915 "bdev_lvol_set_read_only", 00:09:08.915 "bdev_lvol_resize", 00:09:08.915 "bdev_lvol_decouple_parent", 00:09:08.915 "bdev_lvol_inflate", 00:09:08.915 "bdev_lvol_rename", 00:09:08.915 "bdev_lvol_clone_bdev", 00:09:08.915 "bdev_lvol_clone", 00:09:08.915 "bdev_lvol_snapshot", 00:09:08.915 "bdev_lvol_create", 00:09:08.915 "bdev_lvol_delete_lvstore", 00:09:08.915 "bdev_lvol_rename_lvstore", 
00:09:08.915 "bdev_lvol_create_lvstore", 00:09:08.915 "bdev_raid_set_options", 00:09:08.915 "bdev_raid_remove_base_bdev", 00:09:08.915 "bdev_raid_add_base_bdev", 00:09:08.915 "bdev_raid_delete", 00:09:08.915 "bdev_raid_create", 00:09:08.915 "bdev_raid_get_bdevs", 00:09:08.915 "bdev_error_inject_error", 00:09:08.915 "bdev_error_delete", 00:09:08.915 "bdev_error_create", 00:09:08.915 "bdev_split_delete", 00:09:08.915 "bdev_split_create", 00:09:08.915 "bdev_delay_delete", 00:09:08.915 "bdev_delay_create", 00:09:08.915 "bdev_delay_update_latency", 00:09:08.915 "bdev_zone_block_delete", 00:09:08.915 "bdev_zone_block_create", 00:09:08.915 "blobfs_create", 00:09:08.915 "blobfs_detect", 00:09:08.915 "blobfs_set_cache_size", 00:09:08.915 "bdev_aio_delete", 00:09:08.915 "bdev_aio_rescan", 00:09:08.915 "bdev_aio_create", 00:09:08.915 "bdev_ftl_set_property", 00:09:08.915 "bdev_ftl_get_properties", 00:09:08.915 "bdev_ftl_get_stats", 00:09:08.915 "bdev_ftl_unmap", 00:09:08.915 "bdev_ftl_unload", 00:09:08.915 "bdev_ftl_delete", 00:09:08.915 "bdev_ftl_load", 00:09:08.915 "bdev_ftl_create", 00:09:08.915 "bdev_virtio_attach_controller", 00:09:08.915 "bdev_virtio_scsi_get_devices", 00:09:08.915 "bdev_virtio_detach_controller", 00:09:08.915 "bdev_virtio_blk_set_hotplug", 00:09:08.915 "bdev_iscsi_delete", 00:09:08.915 "bdev_iscsi_create", 00:09:08.915 "bdev_iscsi_set_options", 00:09:08.915 "accel_error_inject_error", 00:09:08.915 "ioat_scan_accel_module", 00:09:08.915 "dsa_scan_accel_module", 00:09:08.915 "iaa_scan_accel_module", 00:09:08.915 "vfu_virtio_create_fs_endpoint", 00:09:08.915 "vfu_virtio_create_scsi_endpoint", 00:09:08.915 "vfu_virtio_scsi_remove_target", 00:09:08.915 "vfu_virtio_scsi_add_target", 00:09:08.915 "vfu_virtio_create_blk_endpoint", 00:09:08.915 "vfu_virtio_delete_endpoint", 00:09:08.915 "keyring_file_remove_key", 00:09:08.915 "keyring_file_add_key", 00:09:08.915 "keyring_linux_set_options", 00:09:08.915 "fsdev_aio_delete", 00:09:08.915 "fsdev_aio_create", 00:09:08.915 "iscsi_get_histogram", 00:09:08.915 "iscsi_enable_histogram", 00:09:08.915 "iscsi_set_options", 00:09:08.915 "iscsi_get_auth_groups", 00:09:08.915 "iscsi_auth_group_remove_secret", 00:09:08.915 "iscsi_auth_group_add_secret", 00:09:08.915 "iscsi_delete_auth_group", 00:09:08.915 "iscsi_create_auth_group", 00:09:08.915 "iscsi_set_discovery_auth", 00:09:08.915 "iscsi_get_options", 00:09:08.915 "iscsi_target_node_request_logout", 00:09:08.915 "iscsi_target_node_set_redirect", 00:09:08.915 "iscsi_target_node_set_auth", 00:09:08.915 "iscsi_target_node_add_lun", 00:09:08.915 "iscsi_get_stats", 00:09:08.915 "iscsi_get_connections", 00:09:08.915 "iscsi_portal_group_set_auth", 00:09:08.915 "iscsi_start_portal_group", 00:09:08.915 "iscsi_delete_portal_group", 00:09:08.915 "iscsi_create_portal_group", 00:09:08.915 "iscsi_get_portal_groups", 00:09:08.915 "iscsi_delete_target_node", 00:09:08.915 "iscsi_target_node_remove_pg_ig_maps", 00:09:08.915 "iscsi_target_node_add_pg_ig_maps", 00:09:08.915 "iscsi_create_target_node", 00:09:08.915 "iscsi_get_target_nodes", 00:09:08.915 "iscsi_delete_initiator_group", 00:09:08.915 "iscsi_initiator_group_remove_initiators", 00:09:08.915 "iscsi_initiator_group_add_initiators", 00:09:08.915 "iscsi_create_initiator_group", 00:09:08.915 "iscsi_get_initiator_groups", 00:09:08.915 "nvmf_set_crdt", 00:09:08.915 "nvmf_set_config", 00:09:08.915 "nvmf_set_max_subsystems", 00:09:08.915 "nvmf_stop_mdns_prr", 00:09:08.915 "nvmf_publish_mdns_prr", 00:09:08.915 "nvmf_subsystem_get_listeners", 00:09:08.915 
"nvmf_subsystem_get_qpairs", 00:09:08.915 "nvmf_subsystem_get_controllers", 00:09:08.915 "nvmf_get_stats", 00:09:08.915 "nvmf_get_transports", 00:09:08.915 "nvmf_create_transport", 00:09:08.915 "nvmf_get_targets", 00:09:08.915 "nvmf_delete_target", 00:09:08.915 "nvmf_create_target", 00:09:08.915 "nvmf_subsystem_allow_any_host", 00:09:08.915 "nvmf_subsystem_set_keys", 00:09:08.915 "nvmf_subsystem_remove_host", 00:09:08.915 "nvmf_subsystem_add_host", 00:09:08.915 "nvmf_ns_remove_host", 00:09:08.915 "nvmf_ns_add_host", 00:09:08.915 "nvmf_subsystem_remove_ns", 00:09:08.915 "nvmf_subsystem_set_ns_ana_group", 00:09:08.915 "nvmf_subsystem_add_ns", 00:09:08.915 "nvmf_subsystem_listener_set_ana_state", 00:09:08.915 "nvmf_discovery_get_referrals", 00:09:08.915 "nvmf_discovery_remove_referral", 00:09:08.915 "nvmf_discovery_add_referral", 00:09:08.915 "nvmf_subsystem_remove_listener", 00:09:08.915 "nvmf_subsystem_add_listener", 00:09:08.915 "nvmf_delete_subsystem", 00:09:08.915 "nvmf_create_subsystem", 00:09:08.915 "nvmf_get_subsystems", 00:09:08.915 "env_dpdk_get_mem_stats", 00:09:08.915 "nbd_get_disks", 00:09:08.915 "nbd_stop_disk", 00:09:08.915 "nbd_start_disk", 00:09:08.915 "ublk_recover_disk", 00:09:08.915 "ublk_get_disks", 00:09:08.915 "ublk_stop_disk", 00:09:08.915 "ublk_start_disk", 00:09:08.915 "ublk_destroy_target", 00:09:08.915 "ublk_create_target", 00:09:08.915 "virtio_blk_create_transport", 00:09:08.915 "virtio_blk_get_transports", 00:09:08.915 "vhost_controller_set_coalescing", 00:09:08.915 "vhost_get_controllers", 00:09:08.915 "vhost_delete_controller", 00:09:08.915 "vhost_create_blk_controller", 00:09:08.915 "vhost_scsi_controller_remove_target", 00:09:08.915 "vhost_scsi_controller_add_target", 00:09:08.915 "vhost_start_scsi_controller", 00:09:08.915 "vhost_create_scsi_controller", 00:09:08.915 "thread_set_cpumask", 00:09:08.915 "scheduler_set_options", 00:09:08.915 "framework_get_governor", 00:09:08.915 "framework_get_scheduler", 00:09:08.915 "framework_set_scheduler", 00:09:08.915 "framework_get_reactors", 00:09:08.915 "thread_get_io_channels", 00:09:08.915 "thread_get_pollers", 00:09:08.915 "thread_get_stats", 00:09:08.915 "framework_monitor_context_switch", 00:09:08.915 "spdk_kill_instance", 00:09:08.916 "log_enable_timestamps", 00:09:08.916 "log_get_flags", 00:09:08.916 "log_clear_flag", 00:09:08.916 "log_set_flag", 00:09:08.916 "log_get_level", 00:09:08.916 "log_set_level", 00:09:08.916 "log_get_print_level", 00:09:08.916 "log_set_print_level", 00:09:08.916 "framework_enable_cpumask_locks", 00:09:08.916 "framework_disable_cpumask_locks", 00:09:08.916 "framework_wait_init", 00:09:08.916 "framework_start_init", 00:09:08.916 "scsi_get_devices", 00:09:08.916 "bdev_get_histogram", 00:09:08.916 "bdev_enable_histogram", 00:09:08.916 "bdev_set_qos_limit", 00:09:08.916 "bdev_set_qd_sampling_period", 00:09:08.916 "bdev_get_bdevs", 00:09:08.916 "bdev_reset_iostat", 00:09:08.916 "bdev_get_iostat", 00:09:08.916 "bdev_examine", 00:09:08.916 "bdev_wait_for_examine", 00:09:08.916 "bdev_set_options", 00:09:08.916 "accel_get_stats", 00:09:08.916 "accel_set_options", 00:09:08.916 "accel_set_driver", 00:09:08.916 "accel_crypto_key_destroy", 00:09:08.916 "accel_crypto_keys_get", 00:09:08.916 "accel_crypto_key_create", 00:09:08.916 "accel_assign_opc", 00:09:08.916 "accel_get_module_info", 00:09:08.916 "accel_get_opc_assignments", 00:09:08.916 "vmd_rescan", 00:09:08.916 "vmd_remove_device", 00:09:08.916 "vmd_enable", 00:09:08.916 "sock_get_default_impl", 00:09:08.916 "sock_set_default_impl", 
00:09:08.916 "sock_impl_set_options", 00:09:08.916 "sock_impl_get_options", 00:09:08.916 "iobuf_get_stats", 00:09:08.916 "iobuf_set_options", 00:09:08.916 "keyring_get_keys", 00:09:08.916 "vfu_tgt_set_base_path", 00:09:08.916 "framework_get_pci_devices", 00:09:08.916 "framework_get_config", 00:09:08.916 "framework_get_subsystems", 00:09:08.916 "fsdev_set_opts", 00:09:08.916 "fsdev_get_opts", 00:09:08.916 "trace_get_info", 00:09:08.916 "trace_get_tpoint_group_mask", 00:09:08.916 "trace_disable_tpoint_group", 00:09:08.916 "trace_enable_tpoint_group", 00:09:08.916 "trace_clear_tpoint_mask", 00:09:08.916 "trace_set_tpoint_mask", 00:09:08.916 "notify_get_notifications", 00:09:08.916 "notify_get_types", 00:09:08.916 "spdk_get_version", 00:09:08.916 "rpc_get_methods" 00:09:08.916 ] 00:09:08.916 14:02:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.916 14:02:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:08.916 14:02:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2593740 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2593740 ']' 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2593740 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593740 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593740' 00:09:08.916 killing process with pid 2593740 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2593740 00:09:08.916 14:02:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2593740 00:09:09.177 00:09:09.177 real 0m1.536s 00:09:09.177 user 0m2.834s 00:09:09.177 sys 0m0.450s 00:09:09.177 14:02:57 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.177 14:02:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.177 ************************************ 00:09:09.177 END TEST spdkcli_tcp 00:09:09.177 ************************************ 00:09:09.177 14:02:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:09.177 14:02:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.177 14:02:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.177 14:02:57 -- common/autotest_common.sh@10 -- # set +x 00:09:09.177 ************************************ 00:09:09.177 START TEST dpdk_mem_utility 00:09:09.177 ************************************ 00:09:09.177 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:09.438 * Looking for test storage... 
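The spdkcli_tcp block above is essentially a TCP front-end check for SPDK's JSON-RPC interface: socat bridges the target's UNIX-domain socket to TCP port 9998 (tcp.sh@30), and rpc.py then lists every registered method over that bridge. A minimal sketch of the same flow, reusing the exact commands from the trace; $SPDK_DIR is only a placeholder for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and a target is assumed to already be listening on /var/tmp/spdk.sock:

    # Bridge the target's UNIX-domain RPC socket to TCP port 9998, as tcp.sh@30 does above
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # List the registered JSON-RPC methods over TCP; -r retries the connection, -t is the per-call timeout
    $SPDK_DIR/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill $socat_pid 2>/dev/null || true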
00:09:09.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:09.438 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.438 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.438 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.438 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.438 14:02:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:09.438 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.438 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.438 --rc genhtml_branch_coverage=1 00:09:09.438 --rc genhtml_function_coverage=1 00:09:09.438 --rc genhtml_legend=1 00:09:09.438 --rc geninfo_all_blocks=1 00:09:09.438 --rc geninfo_unexecuted_blocks=1 00:09:09.438 00:09:09.438 ' 00:09:09.438 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.438 --rc 
genhtml_branch_coverage=1 00:09:09.438 --rc genhtml_function_coverage=1 00:09:09.438 --rc genhtml_legend=1 00:09:09.438 --rc geninfo_all_blocks=1 00:09:09.438 --rc geninfo_unexecuted_blocks=1 00:09:09.438 00:09:09.438 ' 00:09:09.438 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.438 --rc genhtml_branch_coverage=1 00:09:09.438 --rc genhtml_function_coverage=1 00:09:09.438 --rc genhtml_legend=1 00:09:09.438 --rc geninfo_all_blocks=1 00:09:09.438 --rc geninfo_unexecuted_blocks=1 00:09:09.438 00:09:09.438 ' 00:09:09.438 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.438 --rc genhtml_branch_coverage=1 00:09:09.438 --rc genhtml_function_coverage=1 00:09:09.438 --rc genhtml_legend=1 00:09:09.438 --rc geninfo_all_blocks=1 00:09:09.438 --rc geninfo_unexecuted_blocks=1 00:09:09.438 00:09:09.438 ' 00:09:09.438 14:02:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:09.438 14:02:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2594149 00:09:09.438 14:02:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2594149 00:09:09.439 14:02:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:09.439 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2594149 ']' 00:09:09.439 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.439 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.439 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.439 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.439 14:02:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:09.439 [2024-12-06 14:02:57.995435] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
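The dpdk_mem_utility test that starts here drives scripts/dpdk_mem_info.py (the MEM_SCRIPT above) against a memory dump that spdk_tgt writes on request. A rough sketch of that flow, assuming the target is already up on the default /var/tmp/spdk.sock and with $SPDK_DIR again standing in for the spdk checkout:

    # Ask spdk_tgt to dump its DPDK memory state; the reply names the dump file
    $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats     # -> {"filename": "/tmp/spdk_mem_dump.txt"}

    # Summarize heaps, mempools and memzones from that dump
    $SPDK_DIR/scripts/dpdk_mem_info.py

    # Per-element view of heap 0: free list, malloc elements and associated memzones
    $SPDK_DIR/scripts/dpdk_mem_info.py -m 0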
00:09:09.439 [2024-12-06 14:02:57.995522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594149 ] 00:09:09.698 [2024-12-06 14:02:58.082774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.698 [2024-12-06 14:02:58.124295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.274 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.274 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:10.274 14:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:10.274 14:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:10.274 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.274 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:10.274 { 00:09:10.274 "filename": "/tmp/spdk_mem_dump.txt" 00:09:10.274 } 00:09:10.274 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.274 14:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:10.274 DPDK memory size 818.000000 MiB in 1 heap(s) 00:09:10.274 1 heaps totaling size 818.000000 MiB 00:09:10.274 size: 818.000000 MiB heap id: 0 00:09:10.274 end heaps---------- 00:09:10.274 9 mempools totaling size 603.782043 MiB 00:09:10.274 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:10.274 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:10.274 size: 100.555481 MiB name: bdev_io_2594149 00:09:10.274 size: 50.003479 MiB name: msgpool_2594149 00:09:10.274 size: 36.509338 MiB name: fsdev_io_2594149 00:09:10.274 size: 21.763794 MiB name: PDU_Pool 00:09:10.274 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:10.274 size: 4.133484 MiB name: evtpool_2594149 00:09:10.274 size: 0.026123 MiB name: Session_Pool 00:09:10.274 end mempools------- 00:09:10.274 6 memzones totaling size 4.142822 MiB 00:09:10.274 size: 1.000366 MiB name: RG_ring_0_2594149 00:09:10.274 size: 1.000366 MiB name: RG_ring_1_2594149 00:09:10.274 size: 1.000366 MiB name: RG_ring_4_2594149 00:09:10.274 size: 1.000366 MiB name: RG_ring_5_2594149 00:09:10.274 size: 0.125366 MiB name: RG_ring_2_2594149 00:09:10.274 size: 0.015991 MiB name: RG_ring_3_2594149 00:09:10.274 end memzones------- 00:09:10.274 14:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:10.274 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:09:10.274 list of free elements. 
size: 10.852478 MiB 00:09:10.274 element at address: 0x200019200000 with size: 0.999878 MiB 00:09:10.274 element at address: 0x200019400000 with size: 0.999878 MiB 00:09:10.274 element at address: 0x200000400000 with size: 0.998535 MiB 00:09:10.274 element at address: 0x200032000000 with size: 0.994446 MiB 00:09:10.274 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:10.274 element at address: 0x200012c00000 with size: 0.944275 MiB 00:09:10.274 element at address: 0x200019600000 with size: 0.936584 MiB 00:09:10.274 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:10.274 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:09:10.274 element at address: 0x200000c00000 with size: 0.495422 MiB 00:09:10.274 element at address: 0x20000a600000 with size: 0.490723 MiB 00:09:10.274 element at address: 0x200019800000 with size: 0.485657 MiB 00:09:10.274 element at address: 0x200003e00000 with size: 0.481934 MiB 00:09:10.274 element at address: 0x200028200000 with size: 0.410034 MiB 00:09:10.274 element at address: 0x200000800000 with size: 0.355042 MiB 00:09:10.274 list of standard malloc elements. size: 199.218628 MiB 00:09:10.274 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:10.274 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:10.274 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:10.274 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:09:10.274 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:09:10.274 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:10.274 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:09:10.274 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:10.274 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:09:10.274 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20000085b040 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20000085f300 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:10.274 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:10.274 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:10.274 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:09:10.274 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:10.274 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:10.274 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:09:10.274 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:09:10.274 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:09:10.274 element at address: 0x200028268f80 with size: 0.000183 MiB 00:09:10.274 element at address: 0x200028269040 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:09:10.274 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:09:10.274 list of memzone associated elements. size: 607.928894 MiB 00:09:10.274 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:09:10.274 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:10.274 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:09:10.274 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:10.274 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:09:10.274 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2594149_0 00:09:10.274 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:10.274 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2594149_0 00:09:10.274 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:10.274 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2594149_0 00:09:10.274 element at address: 0x2000199be940 with size: 20.255554 MiB 00:09:10.274 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:10.274 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:09:10.274 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:10.274 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:10.274 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2594149_0 00:09:10.274 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:10.274 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2594149 00:09:10.274 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:10.274 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2594149 00:09:10.274 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:10.274 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:10.274 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:09:10.274 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:10.274 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:10.274 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:10.274 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:10.274 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:10.275 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:10.275 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2594149 00:09:10.275 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:10.275 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2594149 00:09:10.275 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:09:10.275 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2594149 00:09:10.275 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:09:10.275 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2594149 00:09:10.275 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:10.275 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2594149 00:09:10.275 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:10.275 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2594149 00:09:10.275 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:10.275 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:10.275 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:10.275 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:10.275 element at address: 0x20001987c540 with size: 0.250488 MiB 00:09:10.275 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:10.275 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:10.275 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2594149 00:09:10.275 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:09:10.275 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2594149 00:09:10.275 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:09:10.275 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:10.275 element at address: 0x200028269100 with size: 0.023743 MiB 00:09:10.275 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:10.275 element at address: 0x20000085b100 with size: 0.016113 MiB 00:09:10.275 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2594149 00:09:10.275 element at address: 0x20002826f240 with size: 0.002441 MiB 00:09:10.275 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:10.275 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:09:10.275 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2594149 00:09:10.275 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:10.275 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2594149 00:09:10.275 element at address: 0x20000085af00 with size: 0.000305 MiB 00:09:10.275 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2594149 00:09:10.275 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:09:10.275 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:10.275 14:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:10.275 14:02:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2594149 00:09:10.275 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2594149 ']' 00:09:10.275 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2594149 00:09:10.275 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:10.275 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.275 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2594149 00:09:10.535 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.535 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.535 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2594149' 00:09:10.535 killing process with pid 2594149 00:09:10.535 14:02:58 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2594149 00:09:10.535 14:02:58 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2594149 00:09:10.535 00:09:10.535 real 0m1.401s 00:09:10.535 user 0m1.469s 00:09:10.535 sys 0m0.424s 00:09:10.535 14:02:59 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.535 14:02:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:10.535 ************************************ 00:09:10.535 END TEST dpdk_mem_utility 00:09:10.535 ************************************ 00:09:10.797 14:02:59 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:10.797 14:02:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.797 14:02:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.797 14:02:59 -- common/autotest_common.sh@10 -- # set +x 00:09:10.797 ************************************ 00:09:10.797 START TEST event 00:09:10.797 ************************************ 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:10.797 * Looking for test storage... 00:09:10.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1711 -- # lcov --version 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:10.797 14:02:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.797 14:02:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.797 14:02:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.797 14:02:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.797 14:02:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.797 14:02:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.797 14:02:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.797 14:02:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.797 14:02:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.797 14:02:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.797 14:02:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.797 14:02:59 event -- scripts/common.sh@344 -- # case "$op" in 00:09:10.797 14:02:59 event -- scripts/common.sh@345 -- # : 1 00:09:10.797 14:02:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.797 14:02:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.797 14:02:59 event -- scripts/common.sh@365 -- # decimal 1 00:09:10.797 14:02:59 event -- scripts/common.sh@353 -- # local d=1 00:09:10.797 14:02:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.797 14:02:59 event -- scripts/common.sh@355 -- # echo 1 00:09:10.797 14:02:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.797 14:02:59 event -- scripts/common.sh@366 -- # decimal 2 00:09:10.797 14:02:59 event -- scripts/common.sh@353 -- # local d=2 00:09:10.797 14:02:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.797 14:02:59 event -- scripts/common.sh@355 -- # echo 2 00:09:10.797 14:02:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.797 14:02:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.797 14:02:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.797 14:02:59 event -- scripts/common.sh@368 -- # return 0 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:10.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.797 --rc genhtml_branch_coverage=1 00:09:10.797 --rc genhtml_function_coverage=1 00:09:10.797 --rc genhtml_legend=1 00:09:10.797 --rc geninfo_all_blocks=1 00:09:10.797 --rc geninfo_unexecuted_blocks=1 00:09:10.797 00:09:10.797 ' 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:10.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.797 --rc genhtml_branch_coverage=1 00:09:10.797 --rc genhtml_function_coverage=1 00:09:10.797 --rc genhtml_legend=1 00:09:10.797 --rc geninfo_all_blocks=1 00:09:10.797 --rc geninfo_unexecuted_blocks=1 00:09:10.797 00:09:10.797 ' 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:10.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.797 --rc genhtml_branch_coverage=1 00:09:10.797 --rc genhtml_function_coverage=1 00:09:10.797 --rc genhtml_legend=1 00:09:10.797 --rc geninfo_all_blocks=1 00:09:10.797 --rc geninfo_unexecuted_blocks=1 00:09:10.797 00:09:10.797 ' 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:10.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.797 --rc genhtml_branch_coverage=1 00:09:10.797 --rc genhtml_function_coverage=1 00:09:10.797 --rc genhtml_legend=1 00:09:10.797 --rc geninfo_all_blocks=1 00:09:10.797 --rc geninfo_unexecuted_blocks=1 00:09:10.797 00:09:10.797 ' 00:09:10.797 14:02:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:10.797 14:02:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:10.797 14:02:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:10.797 14:02:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.797 14:02:59 event -- common/autotest_common.sh@10 -- # set +x 00:09:11.058 ************************************ 00:09:11.058 START TEST event_perf 00:09:11.058 ************************************ 00:09:11.058 14:02:59 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:09:11.058 Running I/O for 1 seconds...[2024-12-06 14:02:59.474564] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:11.058 [2024-12-06 14:02:59.474678] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594518 ] 00:09:11.058 [2024-12-06 14:02:59.567003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.058 [2024-12-06 14:02:59.610328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.058 [2024-12-06 14:02:59.610452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.058 [2024-12-06 14:02:59.610607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.058 [2024-12-06 14:02:59.610733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.070 Running I/O for 1 seconds... 00:09:12.070 lcore 0: 172911 00:09:12.070 lcore 1: 172914 00:09:12.070 lcore 2: 172913 00:09:12.070 lcore 3: 172914 00:09:12.070 done. 00:09:12.070 00:09:12.070 real 0m1.187s 00:09:12.070 user 0m4.089s 00:09:12.070 sys 0m0.094s 00:09:12.070 14:03:00 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.070 14:03:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:12.070 ************************************ 00:09:12.070 END TEST event_perf 00:09:12.070 ************************************ 00:09:12.070 14:03:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:12.070 14:03:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:12.070 14:03:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.070 14:03:00 event -- common/autotest_common.sh@10 -- # set +x 00:09:12.330 ************************************ 00:09:12.330 START TEST event_reactor 00:09:12.330 ************************************ 00:09:12.330 14:03:00 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:12.330 [2024-12-06 14:03:00.738450] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
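The event_perf run above (-m 0xF, lcores 0-3) and the single-core runs that follow (-c 0x1) both select CPUs with a hexadecimal core mask, one bit per logical core. A small helper for building such masks, offered only as an illustration:

    # Bit i selects logical core i: cores 0-3 -> 0xf, core 0 alone -> 0x1
    cores_to_mask() {
        local mask=0 c
        for c in "$@"; do mask=$(( mask | (1 << c) )); done
        printf '0x%x\n' "$mask"
    }
    cores_to_mask 0 1 2 3   # 0xf
    cores_to_mask 0         # 0x1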
00:09:12.330 [2024-12-06 14:03:00.738594] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594686 ] 00:09:12.330 [2024-12-06 14:03:00.828592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.330 [2024-12-06 14:03:00.862171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.274 test_start 00:09:13.274 oneshot 00:09:13.274 tick 100 00:09:13.274 tick 100 00:09:13.274 tick 250 00:09:13.274 tick 100 00:09:13.274 tick 100 00:09:13.274 tick 250 00:09:13.274 tick 100 00:09:13.274 tick 500 00:09:13.274 tick 100 00:09:13.274 tick 100 00:09:13.274 tick 250 00:09:13.274 tick 100 00:09:13.274 tick 100 00:09:13.274 test_end 00:09:13.274 00:09:13.274 real 0m1.172s 00:09:13.274 user 0m1.093s 00:09:13.274 sys 0m0.074s 00:09:13.274 14:03:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.274 14:03:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:13.274 ************************************ 00:09:13.274 END TEST event_reactor 00:09:13.274 ************************************ 00:09:13.534 14:03:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:13.534 14:03:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:13.534 14:03:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.534 14:03:01 event -- common/autotest_common.sh@10 -- # set +x 00:09:13.534 ************************************ 00:09:13.534 START TEST event_reactor_perf 00:09:13.534 ************************************ 00:09:13.534 14:03:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:13.534 [2024-12-06 14:03:01.989134] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:13.534 [2024-12-06 14:03:01.989233] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595054 ] 00:09:13.534 [2024-12-06 14:03:02.079013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.534 [2024-12-06 14:03:02.117074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.927 test_start 00:09:14.927 test_end 00:09:14.927 Performance: 533747 events per second 00:09:14.927 00:09:14.927 real 0m1.177s 00:09:14.927 user 0m1.090s 00:09:14.927 sys 0m0.082s 00:09:14.927 14:03:03 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.927 14:03:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:14.927 ************************************ 00:09:14.927 END TEST event_reactor_perf 00:09:14.927 ************************************ 00:09:14.927 14:03:03 event -- event/event.sh@49 -- # uname -s 00:09:14.927 14:03:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:14.927 14:03:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:14.927 14:03:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.927 14:03:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.927 14:03:03 event -- common/autotest_common.sh@10 -- # set +x 00:09:14.927 ************************************ 00:09:14.927 START TEST event_scheduler 00:09:14.927 ************************************ 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:14.927 * Looking for test storage... 
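The figure reported above ("Performance: 533747 events per second") appears to be the number of SPDK events a single reactor got through during the -t 1 run. For a longer or pinned measurement the same binary can presumably be rerun standalone; the path is the one event.sh used above, -t is the duration flag it was given, and -m is assumed to be accepted as the standard SPDK app core-mask option:

    # Same micro-benchmark, 10 seconds, pinned to core 0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -m 0x1 -t 10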
00:09:14.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.927 14:03:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.927 --rc genhtml_branch_coverage=1 00:09:14.927 --rc genhtml_function_coverage=1 00:09:14.927 --rc genhtml_legend=1 00:09:14.927 --rc geninfo_all_blocks=1 00:09:14.927 --rc geninfo_unexecuted_blocks=1 00:09:14.927 00:09:14.927 ' 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.927 --rc genhtml_branch_coverage=1 00:09:14.927 --rc genhtml_function_coverage=1 00:09:14.927 --rc genhtml_legend=1 00:09:14.927 --rc geninfo_all_blocks=1 00:09:14.927 --rc geninfo_unexecuted_blocks=1 00:09:14.927 00:09:14.927 ' 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.927 --rc genhtml_branch_coverage=1 00:09:14.927 --rc genhtml_function_coverage=1 00:09:14.927 --rc genhtml_legend=1 00:09:14.927 --rc geninfo_all_blocks=1 00:09:14.927 --rc geninfo_unexecuted_blocks=1 00:09:14.927 00:09:14.927 ' 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.927 --rc genhtml_branch_coverage=1 00:09:14.927 --rc genhtml_function_coverage=1 00:09:14.927 --rc genhtml_legend=1 00:09:14.927 --rc geninfo_all_blocks=1 00:09:14.927 --rc geninfo_unexecuted_blocks=1 00:09:14.927 00:09:14.927 ' 00:09:14.927 14:03:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:14.927 14:03:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2595440 00:09:14.927 14:03:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:14.927 14:03:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:14.927 14:03:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2595440 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2595440 ']' 00:09:14.927 14:03:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.928 14:03:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.928 14:03:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.928 14:03:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.928 14:03:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:14.928 [2024-12-06 14:03:03.477639] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:14.928 [2024-12-06 14:03:03.477715] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595440 ] 00:09:15.189 [2024-12-06 14:03:03.569295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.189 [2024-12-06 14:03:03.614925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.189 [2024-12-06 14:03:03.615083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.189 [2024-12-06 14:03:03.615242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.189 [2024-12-06 14:03:03.615243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:15.761 14:03:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:15.761 [2024-12-06 14:03:04.289629] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:15.761 [2024-12-06 14:03:04.289651] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:15.761 [2024-12-06 14:03:04.289662] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:15.761 [2024-12-06 14:03:04.289668] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:15.761 [2024-12-06 14:03:04.289674] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.761 14:03:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:15.761 [2024-12-06 14:03:04.357819] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.761 14:03:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.761 14:03:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:15.761 ************************************ 00:09:15.761 START TEST scheduler_create_thread 00:09:15.761 ************************************ 00:09:15.761 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:15.761 14:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:15.761 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.761 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.022 2 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.022 3 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.022 4 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.022 5 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.022 6 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.022 7 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.022 8 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.022 9 00:09:16.022 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.023 14:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:16.023 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.023 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:16.595 10 00:09:16.595 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.595 14:03:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:16.595 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.595 14:03:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.981 14:03:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.981 14:03:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:17.981 14:03:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:17.981 14:03:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.981 14:03:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:18.552 14:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.552 14:03:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:18.552 14:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.552 14:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:19.493 14:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.493 14:03:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:19.493 14:03:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:19.493 14:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.493 14:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.089 14:03:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.089 00:09:20.089 real 0m4.224s 00:09:20.089 user 0m0.023s 00:09:20.089 sys 0m0.008s 00:09:20.089 14:03:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.089 14:03:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.089 ************************************ 00:09:20.089 END TEST scheduler_create_thread 00:09:20.089 ************************************ 00:09:20.089 14:03:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:20.089 14:03:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2595440 00:09:20.089 14:03:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2595440 ']' 00:09:20.089 14:03:08 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2595440 00:09:20.089 14:03:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:20.089 14:03:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.089 14:03:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2595440 00:09:20.349 14:03:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:20.349 14:03:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:20.349 14:03:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2595440' 00:09:20.349 killing process with pid 2595440 00:09:20.349 14:03:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2595440 00:09:20.349 14:03:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2595440 00:09:20.349 [2024-12-06 14:03:08.899529] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
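The scheduler_create_thread test above drives per-thread RPCs (scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete) that come from an rpc.py client plugin shipped with the test rather than from the core RPC set. A sketch of the same calls, assuming the scheduler_plugin module is importable (its directory on PYTHONPATH is an assumption from the test layout) and with $thread_id standing for the id returned by scheduler_thread_create:

    # Make the test plugin importable for rpc.py (path assumed, not shown in the trace)
    export PYTHONPATH=$SPDK_DIR/test/event/scheduler:$PYTHONPATH

    # A busy thread pinned to core 0 and an idle one pinned to core 1, as in the run above
    $SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0

    # Later: retune or remove a thread by the id returned at create time
    $SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    $SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$thread_id"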
00:09:20.610 00:09:20.610 real 0m5.832s 00:09:20.610 user 0m12.892s 00:09:20.610 sys 0m0.430s 00:09:20.610 14:03:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.610 14:03:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:20.610 ************************************ 00:09:20.610 END TEST event_scheduler 00:09:20.610 ************************************ 00:09:20.610 14:03:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:20.610 14:03:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:20.610 14:03:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.610 14:03:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.610 14:03:09 event -- common/autotest_common.sh@10 -- # set +x 00:09:20.610 ************************************ 00:09:20.610 START TEST app_repeat 00:09:20.610 ************************************ 00:09:20.610 14:03:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2596778 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2596778' 00:09:20.610 Process app_repeat pid: 2596778 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:20.610 spdk_app_start Round 0 00:09:20.610 14:03:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2596778 /var/tmp/spdk-nbd.sock 00:09:20.610 14:03:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2596778 ']' 00:09:20.610 14:03:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:20.610 14:03:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.610 14:03:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:20.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:20.610 14:03:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.610 14:03:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:20.610 [2024-12-06 14:03:09.179605] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
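Before the rounds start, event.sh brings up the app_repeat helper on its own RPC socket and arms a cleanup trap. Condensed, the launch is along these lines; waitforlisten and killprocess are the autotest_common.sh helpers traced throughout this log, and $rootdir is shorthand for the long workspace path:

    modprobe nbd
    $rootdir/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # block until the UNIX-domain RPC socket answers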
00:09:20.610 [2024-12-06 14:03:09.179674] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596778 ] 00:09:20.871 [2024-12-06 14:03:09.278668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:20.871 [2024-12-06 14:03:09.312763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.871 [2024-12-06 14:03:09.312765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.871 14:03:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.871 14:03:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:20.871 14:03:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:21.132 Malloc0 00:09:21.132 14:03:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:21.132 Malloc1 00:09:21.132 14:03:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:21.132 14:03:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:21.394 /dev/nbd0 00:09:21.394 14:03:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:21.394 14:03:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:21.394 1+0 records in 00:09:21.394 1+0 records out 00:09:21.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277309 s, 14.8 MB/s 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:21.394 14:03:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:21.394 14:03:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:21.394 14:03:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:21.394 14:03:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:21.654 /dev/nbd1 00:09:21.654 14:03:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:21.654 14:03:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:21.654 14:03:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:21.654 14:03:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:21.654 14:03:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:21.654 14:03:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:21.655 14:03:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:21.655 14:03:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:21.655 14:03:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:21.655 14:03:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:21.655 14:03:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:21.655 1+0 records in 00:09:21.655 1+0 records out 00:09:21.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274511 s, 14.9 MB/s 00:09:21.655 14:03:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:21.655 14:03:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:21.655 14:03:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:21.655 14:03:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:21.655 14:03:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:21.655 14:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:21.655 14:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:21.655 
14:03:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:21.655 14:03:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.655 14:03:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.914 14:03:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:21.914 { 00:09:21.914 "nbd_device": "/dev/nbd0", 00:09:21.914 "bdev_name": "Malloc0" 00:09:21.914 }, 00:09:21.914 { 00:09:21.914 "nbd_device": "/dev/nbd1", 00:09:21.914 "bdev_name": "Malloc1" 00:09:21.915 } 00:09:21.915 ]' 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:21.915 { 00:09:21.915 "nbd_device": "/dev/nbd0", 00:09:21.915 "bdev_name": "Malloc0" 00:09:21.915 }, 00:09:21.915 { 00:09:21.915 "nbd_device": "/dev/nbd1", 00:09:21.915 "bdev_name": "Malloc1" 00:09:21.915 } 00:09:21.915 ]' 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:21.915 /dev/nbd1' 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:21.915 /dev/nbd1' 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:21.915 256+0 records in 00:09:21.915 256+0 records out 00:09:21.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126037 s, 83.2 MB/s 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:21.915 256+0 records in 00:09:21.915 256+0 records out 00:09:21.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120088 s, 87.3 MB/s 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:21.915 256+0 records in 00:09:21.915 256+0 records out 00:09:21.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136367 s, 76.9 MB/s 00:09:21.915 14:03:10 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.915 14:03:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:22.174 14:03:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:22.174 14:03:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:22.174 14:03:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:22.174 14:03:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.174 14:03:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.174 14:03:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:22.174 14:03:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:22.174 14:03:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.174 14:03:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.174 14:03:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.433 14:03:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:22.433 14:03:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:22.433 14:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:22.433 14:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:22.693 14:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:22.693 14:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:22.693 14:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:22.693 14:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:22.693 14:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:22.693 14:03:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:22.693 14:03:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:22.693 14:03:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:22.693 14:03:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:22.693 14:03:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:22.693 14:03:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:22.952 [2024-12-06 14:03:11.367610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:22.952 [2024-12-06 14:03:11.397316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.952 [2024-12-06 14:03:11.397317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.952 [2024-12-06 14:03:11.426353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:22.952 [2024-12-06 14:03:11.426383] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:26.253 14:03:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:26.253 14:03:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:26.253 spdk_app_start Round 1 00:09:26.253 14:03:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2596778 /var/tmp/spdk-nbd.sock 00:09:26.253 14:03:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2596778 ']' 00:09:26.253 14:03:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:26.253 14:03:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.253 14:03:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:26.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
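Round 1 now repeats exactly what Round 0 did. Minus the nbd_common.sh plumbing, each round is a malloc-bdev-over-NBD round trip along these lines (rpc.py meaning scripts/rpc.py -s /var/tmp/spdk-nbd.sock; the 1 MiB compare matches the 256 x 4 KiB writes above):

    rpc.py bdev_malloc_create 64 4096                     # 64 MB malloc bdev, 4 KiB blocks -> Malloc0
    rpc.py nbd_start_disk Malloc0 /dev/nbd0               # expose it as a kernel NBD device
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of reference data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                    # read back through NBD and compare
    rm nbdrandtest
    rpc.py nbd_stop_disk /dev/nbd0
    rpc.py spdk_kill_instance SIGTERM                     # app_repeat logs the shutdown and starts the next round

The same dd/cmp pair also runs against Malloc1 on /dev/nbd1 before the disks are stopped.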
00:09:26.253 14:03:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.253 14:03:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:26.253 14:03:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.253 14:03:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:26.253 14:03:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:26.253 Malloc0 00:09:26.254 14:03:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:26.254 Malloc1 00:09:26.254 14:03:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:26.254 14:03:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:26.516 /dev/nbd0 00:09:26.516 14:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:26.516 14:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:26.516 1+0 records in 00:09:26.516 1+0 records out 00:09:26.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269025 s, 15.2 MB/s 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:26.516 14:03:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:26.516 14:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:26.516 14:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:26.516 14:03:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:26.777 /dev/nbd1 00:09:26.777 14:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:26.777 14:03:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:26.777 1+0 records in 00:09:26.777 1+0 records out 00:09:26.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288997 s, 14.2 MB/s 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:26.777 14:03:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:26.777 14:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:26.777 14:03:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:26.777 14:03:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:26.777 14:03:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.777 14:03:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:27.068 { 00:09:27.068 "nbd_device": "/dev/nbd0", 00:09:27.068 "bdev_name": "Malloc0" 00:09:27.068 }, 00:09:27.068 { 00:09:27.068 "nbd_device": "/dev/nbd1", 00:09:27.068 "bdev_name": "Malloc1" 00:09:27.068 } 00:09:27.068 ]' 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:27.068 { 00:09:27.068 "nbd_device": "/dev/nbd0", 00:09:27.068 "bdev_name": "Malloc0" 00:09:27.068 }, 00:09:27.068 { 00:09:27.068 "nbd_device": "/dev/nbd1", 00:09:27.068 "bdev_name": "Malloc1" 00:09:27.068 } 00:09:27.068 ]' 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:27.068 /dev/nbd1' 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:27.068 /dev/nbd1' 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:27.068 256+0 records in 00:09:27.068 256+0 records out 00:09:27.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127233 s, 82.4 MB/s 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:27.068 256+0 records in 00:09:27.068 256+0 records out 00:09:27.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124296 s, 84.4 MB/s 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:27.068 256+0 records in 00:09:27.068 256+0 records out 00:09:27.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136154 s, 77.0 MB/s 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.068 14:03:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:27.329 14:03:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:27.329 14:03:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:27.329 14:03:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:27.329 14:03:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.329 14:03:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.329 14:03:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:27.329 14:03:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:27.329 14:03:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.329 14:03:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.329 14:03:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.591 14:03:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:27.591 14:03:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:27.591 14:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:27.591 14:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:27.591 14:03:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:27.591 14:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:27.591 14:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:27.591 14:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:27.852 14:03:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:27.852 14:03:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:27.852 14:03:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:27.852 14:03:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:27.852 14:03:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:27.852 14:03:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:27.852 14:03:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:28.114 [2024-12-06 14:03:16.497987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:28.114 [2024-12-06 14:03:16.527222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.114 [2024-12-06 14:03:16.527222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.114 [2024-12-06 14:03:16.556802] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:28.114 [2024-12-06 14:03:16.556832] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:31.416 14:03:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:31.416 14:03:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:31.416 spdk_app_start Round 2 00:09:31.416 14:03:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2596778 /var/tmp/spdk-nbd.sock 00:09:31.416 14:03:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2596778 ']' 00:09:31.416 14:03:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:31.416 14:03:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.416 14:03:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:31.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
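The count checks on either side of each round are just the nbd_get_disks RPC piped through jq. A minimal equivalent of what nbd_common.sh does, with $expected standing in for the value the harness checks (2 while the disks are attached, 0 right before spdk_kill_instance):

    nbd_disks_json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # e.g. [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # grep -c exits 1 on an empty list, hence the true
    [ "$count" -eq "$expected" ]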
00:09:31.416 14:03:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.416 14:03:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:31.416 14:03:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.416 14:03:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:31.416 14:03:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:31.416 Malloc0 00:09:31.416 14:03:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:31.416 Malloc1 00:09:31.416 14:03:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:31.416 14:03:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.416 14:03:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:31.416 14:03:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:31.416 14:03:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.416 14:03:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:31.416 14:03:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:31.416 14:03:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.416 14:03:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:31.416 14:03:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:31.417 14:03:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.417 14:03:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:31.417 14:03:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:31.417 14:03:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:31.417 14:03:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.417 14:03:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:31.677 /dev/nbd0 00:09:31.678 14:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:31.678 14:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:31.678 1+0 records in 00:09:31.678 1+0 records out 00:09:31.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309044 s, 13.3 MB/s 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:31.678 14:03:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:31.678 14:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.678 14:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.678 14:03:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:31.678 /dev/nbd1 00:09:31.938 14:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:31.938 14:03:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:31.938 1+0 records in 00:09:31.938 1+0 records out 00:09:31.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309471 s, 13.2 MB/s 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:31.938 14:03:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:31.938 14:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.938 14:03:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:31.938 14:03:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.938 14:03:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.938 14:03:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:31.938 14:03:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:31.938 { 00:09:31.938 "nbd_device": "/dev/nbd0", 00:09:31.938 "bdev_name": "Malloc0" 00:09:31.938 }, 00:09:31.938 { 00:09:31.938 "nbd_device": "/dev/nbd1", 00:09:31.938 "bdev_name": "Malloc1" 00:09:31.938 } 00:09:31.938 ]' 00:09:31.938 14:03:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:31.938 { 00:09:31.938 "nbd_device": "/dev/nbd0", 00:09:31.938 "bdev_name": "Malloc0" 00:09:31.938 }, 00:09:31.938 { 00:09:31.938 "nbd_device": "/dev/nbd1", 00:09:31.938 "bdev_name": "Malloc1" 00:09:31.938 } 00:09:31.938 ]' 00:09:31.938 14:03:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:32.199 /dev/nbd1' 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:32.199 /dev/nbd1' 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:32.199 256+0 records in 00:09:32.199 256+0 records out 00:09:32.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121816 s, 86.1 MB/s 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.199 14:03:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:32.199 256+0 records in 00:09:32.199 256+0 records out 00:09:32.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121288 s, 86.5 MB/s 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:32.200 256+0 records in 00:09:32.200 256+0 records out 00:09:32.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128942 s, 81.3 MB/s 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.200 14:03:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:32.461 14:03:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:32.461 14:03:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:32.461 14:03:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:32.461 14:03:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.461 14:03:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.461 14:03:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:32.461 14:03:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:32.461 14:03:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.461 14:03:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.461 14:03:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.461 14:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:32.722 14:03:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:32.722 14:03:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:32.981 14:03:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:32.981 [2024-12-06 14:03:21.544783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:32.981 [2024-12-06 14:03:21.574382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.981 [2024-12-06 14:03:21.574383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.981 [2024-12-06 14:03:21.603533] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:32.981 [2024-12-06 14:03:21.603564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:36.277 14:03:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2596778 /var/tmp/spdk-nbd.sock 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2596778 ']' 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:36.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
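Every nbd_start_disk/nbd_stop_disk above is bracketed by waitfornbd (autotest_common.sh) and waitfornbd_exit (nbd_common.sh), which poll /proc/partitions for the device name. A simplified sketch of the first helper; the real one also retries the trial read, and the sleep and the /tmp path here are assumptions, since this run never needed a retry and uses the workspace nbdtest file:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device visible to the kernel yet?
            sleep 0.1                                          # assumed back-off between polls
        done
        # prove the device is readable: one direct 4 KiB read must yield a non-empty file
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

waitfornbd_exit is the mirror image: the same bounded grep loop, returning once the name disappears from /proc/partitions.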
00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:36.277 14:03:24 event.app_repeat -- event/event.sh@39 -- # killprocess 2596778 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2596778 ']' 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2596778 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2596778 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2596778' 00:09:36.277 killing process with pid 2596778 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2596778 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2596778 00:09:36.277 spdk_app_start is called in Round 0. 00:09:36.277 Shutdown signal received, stop current app iteration 00:09:36.277 Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 reinitialization... 00:09:36.277 spdk_app_start is called in Round 1. 00:09:36.277 Shutdown signal received, stop current app iteration 00:09:36.277 Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 reinitialization... 00:09:36.277 spdk_app_start is called in Round 2. 00:09:36.277 Shutdown signal received, stop current app iteration 00:09:36.277 Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 reinitialization... 00:09:36.277 spdk_app_start is called in Round 3. 
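killprocess, used here to stop app_repeat (and earlier the scheduler app), is a guarded kill-and-wait. Condensed to the path this run actually exercises; the real helper in autotest_common.sh additionally special-cases sudo-wrapped processes and non-Linux hosts:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                            # nothing to do if it is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for app_repeat in this log
        fi
        # (the real helper branches differently when process_name is "sudo"; not needed here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap it so the EXIT trap can finish cleanly
    }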
00:09:36.277 Shutdown signal received, stop current app iteration 00:09:36.277 14:03:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:36.277 14:03:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:36.277 00:09:36.277 real 0m15.632s 00:09:36.277 user 0m34.415s 00:09:36.277 sys 0m2.290s 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.277 14:03:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:36.277 ************************************ 00:09:36.277 END TEST app_repeat 00:09:36.277 ************************************ 00:09:36.277 14:03:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:36.277 14:03:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:36.277 14:03:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.277 14:03:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.277 14:03:24 event -- common/autotest_common.sh@10 -- # set +x 00:09:36.277 ************************************ 00:09:36.277 START TEST cpu_locks 00:09:36.277 ************************************ 00:09:36.277 14:03:24 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:36.538 * Looking for test storage... 00:09:36.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:36.538 14:03:24 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:36.538 14:03:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:36.538 14:03:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:36.538 14:03:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.538 14:03:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:36.538 14:03:25 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.538 14:03:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:36.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.538 --rc genhtml_branch_coverage=1 00:09:36.538 --rc genhtml_function_coverage=1 00:09:36.538 --rc genhtml_legend=1 00:09:36.538 --rc geninfo_all_blocks=1 00:09:36.538 --rc geninfo_unexecuted_blocks=1 00:09:36.538 00:09:36.538 ' 00:09:36.538 14:03:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:36.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.538 --rc genhtml_branch_coverage=1 00:09:36.538 --rc genhtml_function_coverage=1 00:09:36.538 --rc genhtml_legend=1 00:09:36.538 --rc geninfo_all_blocks=1 00:09:36.538 --rc geninfo_unexecuted_blocks=1 00:09:36.538 00:09:36.538 ' 00:09:36.538 14:03:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:36.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.538 --rc genhtml_branch_coverage=1 00:09:36.538 --rc genhtml_function_coverage=1 00:09:36.538 --rc genhtml_legend=1 00:09:36.538 --rc geninfo_all_blocks=1 00:09:36.538 --rc geninfo_unexecuted_blocks=1 00:09:36.538 00:09:36.538 ' 00:09:36.538 14:03:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:36.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.538 --rc genhtml_branch_coverage=1 00:09:36.538 --rc genhtml_function_coverage=1 00:09:36.538 --rc genhtml_legend=1 00:09:36.538 --rc geninfo_all_blocks=1 00:09:36.538 --rc geninfo_unexecuted_blocks=1 00:09:36.538 00:09:36.538 ' 00:09:36.538 14:03:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:36.538 14:03:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:36.539 14:03:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:36.539 14:03:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:36.539 14:03:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.539 14:03:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.539 14:03:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:36.539 ************************************ 
00:09:36.539 START TEST default_locks 00:09:36.539 ************************************ 00:09:36.539 14:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:36.539 14:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2600552 00:09:36.539 14:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2600552 00:09:36.539 14:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:36.539 14:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2600552 ']' 00:09:36.539 14:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.539 14:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.539 14:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.539 14:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.539 14:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:36.539 [2024-12-06 14:03:25.152605] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:36.539 [2024-12-06 14:03:25.152674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600552 ] 00:09:36.799 [2024-12-06 14:03:25.241876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.799 [2024-12-06 14:03:25.276163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.370 14:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.370 14:03:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:37.370 14:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2600552 00:09:37.370 14:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2600552 00:09:37.370 14:03:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:37.658 lslocks: write error 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2600552 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2600552 ']' 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2600552 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600552 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2600552' 00:09:37.658 killing process with pid 2600552 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2600552 00:09:37.658 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2600552 00:09:37.918 14:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2600552 00:09:37.918 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2600552 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2600552 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2600552 ']' 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
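The lock probe that cpu_locks.sh relies on is visible just above: locks_exist pipes lslocks -p <pid> into grep -q spdk_cpu_lock, and the stray 'lslocks: write error' lines are most likely harmless EPIPE noise caused by grep -q closing the pipe after its first match. The same probe as a standalone sketch, using the pid from the trace:

  pid=2600552                                 # spdk_tgt pid taken from the trace above
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid still holds a /var/tmp/spdk_cpu_lock_* file"
  else
      echo "pid $pid holds no CPU-core lock"
  fi
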
00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:37.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2600552) - No such process 00:09:37.919 ERROR: process (pid: 2600552) is no longer running 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:37.919 00:09:37.919 real 0m1.373s 00:09:37.919 user 0m1.492s 00:09:37.919 sys 0m0.468s 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.919 14:03:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:37.919 ************************************ 00:09:37.919 END TEST default_locks 00:09:37.919 ************************************ 00:09:37.919 14:03:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:37.919 14:03:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.919 14:03:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.919 14:03:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:37.919 ************************************ 00:09:37.919 START TEST default_locks_via_rpc 00:09:37.919 ************************************ 00:09:37.919 14:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:37.919 14:03:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2600920 00:09:37.919 14:03:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2600920 00:09:37.919 14:03:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:37.919 14:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2600920 ']' 00:09:37.919 14:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.919 14:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.919 14:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:37.919 14:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.919 14:03:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.179 [2024-12-06 14:03:26.598191] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:38.179 [2024-12-06 14:03:26.598244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600920 ] 00:09:38.179 [2024-12-06 14:03:26.681225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.179 [2024-12-06 14:03:26.714747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2600920 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2600920 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2600920 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2600920 ']' 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2600920 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600920 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.119 
14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2600920' 00:09:39.119 killing process with pid 2600920 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2600920 00:09:39.119 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2600920 00:09:39.379 00:09:39.379 real 0m1.365s 00:09:39.379 user 0m1.504s 00:09:39.379 sys 0m0.447s 00:09:39.379 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.379 14:03:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.379 ************************************ 00:09:39.379 END TEST default_locks_via_rpc 00:09:39.379 ************************************ 00:09:39.379 14:03:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:39.379 14:03:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.379 14:03:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.379 14:03:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:39.379 ************************************ 00:09:39.379 START TEST non_locking_app_on_locked_coremask 00:09:39.379 ************************************ 00:09:39.379 14:03:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:39.379 14:03:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2601155 00:09:39.379 14:03:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2601155 /var/tmp/spdk.sock 00:09:39.379 14:03:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:39.379 14:03:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2601155 ']' 00:09:39.379 14:03:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.379 14:03:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.379 14:03:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.379 14:03:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.379 14:03:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:39.639 [2024-12-06 14:03:28.038553] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
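default_locks_via_rpc, which finishes above, exercises the same lock files but toggles them on a running target through RPC rather than at startup: the rpc_cmd calls in the trace invoke framework_disable_cpumask_locks to drop the per-core lock files and framework_enable_cpumask_locks to take them again. Sketched against the default /var/tmp/spdk.sock socket with the rpc.py path used above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" framework_disable_cpumask_locks   # release the lock files held for this app's core mask
  "$rpc" framework_enable_cpumask_locks    # re-acquire them for the same mask
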
00:09:39.639 [2024-12-06 14:03:28.038609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601155 ] 00:09:39.639 [2024-12-06 14:03:28.122788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.639 [2024-12-06 14:03:28.157862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2601296 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2601296 /var/tmp/spdk2.sock 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2601296 ']' 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:40.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.210 14:03:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:40.470 [2024-12-06 14:03:28.883882] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:40.470 [2024-12-06 14:03:28.883935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601296 ] 00:09:40.470 [2024-12-06 14:03:28.970232] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
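non_locking_app_on_locked_coremask, running above, starts a second target on the same core as the first but with --disable-cpumask-locks and a separate RPC socket, so it comes up without contending for the lock ('CPU core locks deactivated'). The essential launch sequence, using the spdk_tgt path and flags from the trace (the real test waits on each RPC socket with waitforlisten instead of sleeping):

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                                                  # first instance claims core 0
  sleep 2                                                               # crude stand-in for waitforlisten
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips the core lock
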
00:09:40.470 [2024-12-06 14:03:28.970253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.470 [2024-12-06 14:03:29.028408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.040 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.040 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:41.040 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2601155 00:09:41.040 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:41.040 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2601155 00:09:41.613 lslocks: write error 00:09:41.613 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2601155 00:09:41.613 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2601155 ']' 00:09:41.613 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2601155 00:09:41.613 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:41.613 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.613 14:03:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2601155 00:09:41.613 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.613 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.613 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2601155' 00:09:41.613 killing process with pid 2601155 00:09:41.613 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2601155 00:09:41.613 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2601155 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2601296 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2601296 ']' 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2601296 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2601296 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2601296' 00:09:41.875 
killing process with pid 2601296 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2601296 00:09:41.875 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2601296 00:09:42.135 00:09:42.135 real 0m2.662s 00:09:42.135 user 0m2.970s 00:09:42.135 sys 0m0.795s 00:09:42.135 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.135 14:03:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:42.135 ************************************ 00:09:42.135 END TEST non_locking_app_on_locked_coremask 00:09:42.135 ************************************ 00:09:42.135 14:03:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:42.135 14:03:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.135 14:03:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.135 14:03:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:42.135 ************************************ 00:09:42.135 START TEST locking_app_on_unlocked_coremask 00:09:42.135 ************************************ 00:09:42.135 14:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:42.135 14:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2601671 00:09:42.135 14:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2601671 /var/tmp/spdk.sock 00:09:42.135 14:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:42.135 14:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2601671 ']' 00:09:42.135 14:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.135 14:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.135 14:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.135 14:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.135 14:03:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:42.396 [2024-12-06 14:03:30.778622] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:42.396 [2024-12-06 14:03:30.778672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601671 ] 00:09:42.396 [2024-12-06 14:03:30.863945] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:42.396 [2024-12-06 14:03:30.863971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.396 [2024-12-06 14:03:30.893516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.967 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.967 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:42.967 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:42.967 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2601987 00:09:42.967 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2601987 /var/tmp/spdk2.sock 00:09:42.967 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2601987 ']' 00:09:42.967 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:42.967 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.967 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:42.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:42.968 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.968 14:03:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:43.229 [2024-12-06 14:03:31.613760] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:43.229 [2024-12-06 14:03:31.613814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601987 ] 00:09:43.229 [2024-12-06 14:03:31.699736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.229 [2024-12-06 14:03:31.762133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.800 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.800 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:43.800 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2601987 00:09:43.800 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2601987 00:09:43.800 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:44.370 lslocks: write error 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2601671 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2601671 ']' 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2601671 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2601671 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2601671' 00:09:44.370 killing process with pid 2601671 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2601671 00:09:44.370 14:03:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2601671 00:09:44.631 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2601987 00:09:44.631 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2601987 ']' 00:09:44.631 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2601987 00:09:44.631 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:44.631 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.631 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2601987 00:09:44.890 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.890 14:03:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.890 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2601987' 00:09:44.890 killing process with pid 2601987 00:09:44.890 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2601987 00:09:44.890 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2601987 00:09:44.890 00:09:44.890 real 0m2.735s 00:09:44.890 user 0m3.072s 00:09:44.890 sys 0m0.814s 00:09:44.890 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.890 14:03:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:44.890 ************************************ 00:09:44.890 END TEST locking_app_on_unlocked_coremask 00:09:44.890 ************************************ 00:09:44.890 14:03:33 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:44.890 14:03:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.890 14:03:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.890 14:03:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:45.228 ************************************ 00:09:45.228 START TEST locking_app_on_locked_coremask 00:09:45.228 ************************************ 00:09:45.228 14:03:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:45.228 14:03:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2602374 00:09:45.228 14:03:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2602374 /var/tmp/spdk.sock 00:09:45.228 14:03:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:45.228 14:03:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2602374 ']' 00:09:45.228 14:03:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.228 14:03:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.229 14:03:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.229 14:03:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.229 14:03:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:45.229 [2024-12-06 14:03:33.591915] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
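The locking_app_on_unlocked_coremask run that ends above is the mirror case: here the first target is the one launched with --disable-cpumask-locks, so the second, lock-enabled target (pid 2601987 in the trace) is the one that ends up owning the core 0 lock file, which the later lslocks probe against that pid confirms. Sketched with the same binary and flags:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks &   # first instance: takes no lock file
  sleep 2                                        # crude stand-in for waitforlisten
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # second instance: claims core 0
  pid2=$!
  sleep 2
  lslocks -p "$pid2" | grep spdk_cpu_lock        # the lock now belongs to the second pid
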
00:09:45.229 [2024-12-06 14:03:33.591968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602374 ] 00:09:45.229 [2024-12-06 14:03:33.677715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.229 [2024-12-06 14:03:33.708085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2602410 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2602410 /var/tmp/spdk2.sock 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2602410 /var/tmp/spdk2.sock 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2602410 /var/tmp/spdk2.sock 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2602410 ']' 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:45.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.814 14:03:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:45.814 [2024-12-06 14:03:34.442325] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:45.814 [2024-12-06 14:03:34.442380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602410 ] 00:09:46.074 [2024-12-06 14:03:34.531286] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2602374 has claimed it. 00:09:46.074 [2024-12-06 14:03:34.531318] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:46.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2602410) - No such process 00:09:46.655 ERROR: process (pid: 2602410) is no longer running 00:09:46.655 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.655 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:46.655 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:46.655 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:46.655 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:46.655 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:46.655 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2602374 00:09:46.655 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2602374 00:09:46.655 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:46.915 lslocks: write error 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2602374 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2602374 ']' 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2602374 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2602374 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2602374' 00:09:46.915 killing process with pid 2602374 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2602374 00:09:46.915 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2602374 00:09:47.174 00:09:47.174 real 0m2.191s 00:09:47.174 user 0m2.496s 00:09:47.174 sys 0m0.605s 00:09:47.174 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:09:47.174 14:03:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:47.174 ************************************ 00:09:47.174 END TEST locking_app_on_locked_coremask 00:09:47.174 ************************************ 00:09:47.174 14:03:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:47.174 14:03:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.174 14:03:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.174 14:03:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:47.174 ************************************ 00:09:47.174 START TEST locking_overlapped_coremask 00:09:47.174 ************************************ 00:09:47.174 14:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:47.174 14:03:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2602757 00:09:47.174 14:03:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2602757 /var/tmp/spdk.sock 00:09:47.174 14:03:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:47.174 14:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2602757 ']' 00:09:47.174 14:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.174 14:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.174 14:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.174 14:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.174 14:03:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:47.434 [2024-12-06 14:03:35.855051] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
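locking_app_on_locked_coremask, which ends above, is the negative case: the second target keeps core locking enabled on a mask the first already holds, so startup aborts with 'Cannot create lock on core 0, probably process 2602374 has claimed it' followed by 'Unable to acquire lock on assigned core mask - exiting.', and the NOT waitforlisten wrapper asserts that failure. The expected behaviour, sketched directly:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                            # holds the lock on core 0
  sleep 2                                         # crude stand-in for waitforlisten
  if "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "unexpected: second instance started on a locked core"
  else
      echo "second instance refused to start, as the test expects"
  fi
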
00:09:47.434 [2024-12-06 14:03:35.855103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602757 ] 00:09:47.434 [2024-12-06 14:03:35.938998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.434 [2024-12-06 14:03:35.972989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.434 [2024-12-06 14:03:35.973136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.434 [2024-12-06 14:03:35.973138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.004 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.004 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:48.004 14:03:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2603065 00:09:48.004 14:03:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2603065 /var/tmp/spdk2.sock 00:09:48.004 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:48.004 14:03:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:48.004 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2603065 /var/tmp/spdk2.sock 00:09:48.004 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:48.004 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.004 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:48.264 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:48.264 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2603065 /var/tmp/spdk2.sock 00:09:48.264 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2603065 ']' 00:09:48.264 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:48.264 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.264 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:48.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:48.264 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.264 14:03:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:48.264 [2024-12-06 14:03:36.692721] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:48.264 [2024-12-06 14:03:36.692775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603065 ] 00:09:48.264 [2024-12-06 14:03:36.804762] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2602757 has claimed it. 00:09:48.264 [2024-12-06 14:03:36.804800] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:48.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2603065) - No such process 00:09:48.834 ERROR: process (pid: 2603065) is no longer running 00:09:48.834 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.834 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:48.834 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:48.834 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:48.834 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:48.834 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:48.834 14:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:48.834 14:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:48.834 14:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:48.834 14:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:48.835 14:03:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2602757 00:09:48.835 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2602757 ']' 00:09:48.835 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2602757 00:09:48.835 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:48.835 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.835 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2602757 00:09:48.835 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.835 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.835 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2602757' 00:09:48.835 killing process with pid 2602757 00:09:48.835 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2602757 00:09:48.835 14:03:37 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2602757 00:09:49.095 00:09:49.095 real 0m1.766s 00:09:49.095 user 0m5.100s 00:09:49.095 sys 0m0.393s 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:49.095 ************************************ 00:09:49.095 END TEST locking_overlapped_coremask 00:09:49.095 ************************************ 00:09:49.095 14:03:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:49.095 14:03:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.095 14:03:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.095 14:03:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.095 ************************************ 00:09:49.095 START TEST locking_overlapped_coremask_via_rpc 00:09:49.095 ************************************ 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2603136 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2603136 /var/tmp/spdk.sock 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2603136 ']' 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.095 14:03:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.095 [2024-12-06 14:03:37.697557] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:49.095 [2024-12-06 14:03:37.697611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603136 ] 00:09:49.356 [2024-12-06 14:03:37.774757] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
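The failure above is the expected one: the two spdk_tgt instances were given overlapping core masks, 0x7 (cores 0-2) for the first and 0x1c (cores 2-4) for the second, so the second cannot claim core 2. A minimal overlap check, assuming only a POSIX shell and not taken from cpu_locks.sh:

  first_mask=0x7     # first target: reactors on cores 0,1,2 as logged above
  second_mask=0x1c   # second target: would need cores 2,3,4
  overlap=$(( first_mask & second_mask ))
  printf 'shared cores mask: 0x%x\n' "$overlap"   # 0x4, i.e. core 2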
00:09:49.356 [2024-12-06 14:03:37.774787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.356 [2024-12-06 14:03:37.812439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.356 [2024-12-06 14:03:37.812572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.356 [2024-12-06 14:03:37.812572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2603461 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2603461 /var/tmp/spdk2.sock 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2603461 ']' 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:49.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.927 14:03:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.927 [2024-12-06 14:03:38.550259] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:49.927 [2024-12-06 14:03:38.550318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603461 ] 00:09:50.189 [2024-12-06 14:03:38.661907] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
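Both targets in this variant start with --disable-cpumask-locks, so no /var/tmp/spdk_cpu_lock_* files exist until framework_enable_cpumask_locks is issued over RPC; check_remaining_locks later verifies that exactly cores 0-2 ended up locked. A sketch of that comparison, following the cpu_locks.sh@36-38 lines echoed in this log:

  locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files actually present
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # one per core in mask 0x7
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'lock files match'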
00:09:50.189 [2024-12-06 14:03:38.661939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.189 [2024-12-06 14:03:38.739536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.189 [2024-12-06 14:03:38.739694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.189 [2024-12-06 14:03:38.739695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.763 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.764 [2024-12-06 14:03:39.364531] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2603136 has claimed it. 
00:09:50.764 request: 00:09:50.764 { 00:09:50.764 "method": "framework_enable_cpumask_locks", 00:09:50.764 "req_id": 1 00:09:50.764 } 00:09:50.764 Got JSON-RPC error response 00:09:50.764 response: 00:09:50.764 { 00:09:50.764 "code": -32603, 00:09:50.764 "message": "Failed to claim CPU core: 2" 00:09:50.764 } 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2603136 /var/tmp/spdk.sock 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2603136 ']' 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.764 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.025 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.025 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:51.025 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2603461 /var/tmp/spdk2.sock 00:09:51.025 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2603461 ']' 00:09:51.025 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.025 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.025 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:51.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
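The -32603 response above is the second target (listening on /var/tmp/spdk2.sock) refusing to claim core 2 because pid 2603136 already holds it. A hedged sketch of driving the same RPC by hand with scripts/rpc.py, assuming the repository path used throughout this run and rpc.py's default socket /var/tmp/spdk.sock for the first target:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py framework_enable_cpumask_locks                         # first target: locks cores 0-2
  $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: 'Failed to claim CPU core: 2'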
00:09:51.025 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.025 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.287 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.287 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:51.287 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:51.287 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:51.287 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:51.287 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:51.287 00:09:51.287 real 0m2.098s 00:09:51.287 user 0m0.863s 00:09:51.287 sys 0m0.161s 00:09:51.287 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.287 14:03:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.287 ************************************ 00:09:51.287 END TEST locking_overlapped_coremask_via_rpc 00:09:51.287 ************************************ 00:09:51.287 14:03:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:51.287 14:03:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2603136 ]] 00:09:51.287 14:03:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2603136 00:09:51.287 14:03:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2603136 ']' 00:09:51.287 14:03:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2603136 00:09:51.287 14:03:39 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:51.287 14:03:39 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.287 14:03:39 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2603136 00:09:51.287 14:03:39 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.287 14:03:39 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.287 14:03:39 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2603136' 00:09:51.287 killing process with pid 2603136 00:09:51.287 14:03:39 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2603136 00:09:51.287 14:03:39 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2603136 00:09:51.548 14:03:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2603461 ]] 00:09:51.548 14:03:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2603461 00:09:51.548 14:03:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2603461 ']' 00:09:51.548 14:03:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2603461 00:09:51.548 14:03:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:51.548 14:03:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:09:51.548 14:03:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2603461 00:09:51.548 14:03:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:51.548 14:03:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:51.548 14:03:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2603461' 00:09:51.548 killing process with pid 2603461 00:09:51.548 14:03:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2603461 00:09:51.548 14:03:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2603461 00:09:51.820 14:03:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:51.820 14:03:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:51.820 14:03:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2603136 ]] 00:09:51.820 14:03:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2603136 00:09:51.820 14:03:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2603136 ']' 00:09:51.820 14:03:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2603136 00:09:51.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2603136) - No such process 00:09:51.820 14:03:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2603136 is not found' 00:09:51.820 Process with pid 2603136 is not found 00:09:51.820 14:03:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2603461 ]] 00:09:51.820 14:03:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2603461 00:09:51.820 14:03:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2603461 ']' 00:09:51.820 14:03:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2603461 00:09:51.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2603461) - No such process 00:09:51.820 14:03:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2603461 is not found' 00:09:51.820 Process with pid 2603461 is not found 00:09:51.820 14:03:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:51.820 00:09:51.820 real 0m15.460s 00:09:51.820 user 0m27.609s 00:09:51.820 sys 0m4.646s 00:09:51.820 14:03:40 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.820 14:03:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:51.820 ************************************ 00:09:51.820 END TEST cpu_locks 00:09:51.820 ************************************ 00:09:51.820 00:09:51.820 real 0m41.141s 00:09:51.820 user 1m21.485s 00:09:51.820 sys 0m8.033s 00:09:51.820 14:03:40 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.820 14:03:40 event -- common/autotest_common.sh@10 -- # set +x 00:09:51.820 ************************************ 00:09:51.820 END TEST event 00:09:51.820 ************************************ 00:09:51.820 14:03:40 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:51.820 14:03:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.820 14:03:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.820 14:03:40 -- common/autotest_common.sh@10 -- # set +x 00:09:51.820 ************************************ 00:09:51.820 START TEST thread 00:09:51.820 ************************************ 00:09:51.820 14:03:40 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:52.080 * Looking for test storage... 00:09:52.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:52.080 14:03:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.080 14:03:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.080 14:03:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.080 14:03:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.080 14:03:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.080 14:03:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.080 14:03:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.080 14:03:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.080 14:03:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.080 14:03:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.080 14:03:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.080 14:03:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:52.080 14:03:40 thread -- scripts/common.sh@345 -- # : 1 00:09:52.080 14:03:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.080 14:03:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.080 14:03:40 thread -- scripts/common.sh@365 -- # decimal 1 00:09:52.080 14:03:40 thread -- scripts/common.sh@353 -- # local d=1 00:09:52.080 14:03:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.080 14:03:40 thread -- scripts/common.sh@355 -- # echo 1 00:09:52.080 14:03:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.080 14:03:40 thread -- scripts/common.sh@366 -- # decimal 2 00:09:52.080 14:03:40 thread -- scripts/common.sh@353 -- # local d=2 00:09:52.080 14:03:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.080 14:03:40 thread -- scripts/common.sh@355 -- # echo 2 00:09:52.080 14:03:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.080 14:03:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.080 14:03:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.080 14:03:40 thread -- scripts/common.sh@368 -- # return 0 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.080 --rc genhtml_branch_coverage=1 00:09:52.080 --rc genhtml_function_coverage=1 00:09:52.080 --rc genhtml_legend=1 00:09:52.080 --rc geninfo_all_blocks=1 00:09:52.080 --rc geninfo_unexecuted_blocks=1 00:09:52.080 00:09:52.080 ' 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.080 --rc genhtml_branch_coverage=1 00:09:52.080 --rc genhtml_function_coverage=1 00:09:52.080 --rc genhtml_legend=1 00:09:52.080 --rc geninfo_all_blocks=1 00:09:52.080 --rc geninfo_unexecuted_blocks=1 00:09:52.080 
00:09:52.080 ' 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.080 --rc genhtml_branch_coverage=1 00:09:52.080 --rc genhtml_function_coverage=1 00:09:52.080 --rc genhtml_legend=1 00:09:52.080 --rc geninfo_all_blocks=1 00:09:52.080 --rc geninfo_unexecuted_blocks=1 00:09:52.080 00:09:52.080 ' 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:52.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.080 --rc genhtml_branch_coverage=1 00:09:52.080 --rc genhtml_function_coverage=1 00:09:52.080 --rc genhtml_legend=1 00:09:52.080 --rc geninfo_all_blocks=1 00:09:52.080 --rc geninfo_unexecuted_blocks=1 00:09:52.080 00:09:52.080 ' 00:09:52.080 14:03:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.080 14:03:40 thread -- common/autotest_common.sh@10 -- # set +x 00:09:52.080 ************************************ 00:09:52.080 START TEST thread_poller_perf 00:09:52.080 ************************************ 00:09:52.080 14:03:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:52.080 [2024-12-06 14:03:40.690850] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:52.080 [2024-12-06 14:03:40.690946] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603915 ] 00:09:52.341 [2024-12-06 14:03:40.780178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.341 [2024-12-06 14:03:40.811868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.341 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:09:53.282 [2024-12-06T13:03:41.922Z] ====================================== 00:09:53.282 [2024-12-06T13:03:41.922Z] busy:2408028780 (cyc) 00:09:53.282 [2024-12-06T13:03:41.922Z] total_run_count: 419000 00:09:53.282 [2024-12-06T13:03:41.922Z] tsc_hz: 2400000000 (cyc) 00:09:53.282 [2024-12-06T13:03:41.922Z] ====================================== 00:09:53.282 [2024-12-06T13:03:41.922Z] poller_cost: 5747 (cyc), 2394 (nsec) 00:09:53.282 00:09:53.282 real 0m1.176s 00:09:53.282 user 0m1.098s 00:09:53.282 sys 0m0.074s 00:09:53.282 14:03:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.282 14:03:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:53.282 ************************************ 00:09:53.282 END TEST thread_poller_perf 00:09:53.283 ************************************ 00:09:53.283 14:03:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:53.283 14:03:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:53.283 14:03:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.283 14:03:41 thread -- common/autotest_common.sh@10 -- # set +x 00:09:53.542 ************************************ 00:09:53.542 START TEST thread_poller_perf 00:09:53.542 ************************************ 00:09:53.542 14:03:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:53.542 [2024-12-06 14:03:41.945965] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:09:53.542 [2024-12-06 14:03:41.946055] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604266 ] 00:09:53.542 [2024-12-06 14:03:42.034151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.542 [2024-12-06 14:03:42.065379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.542 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:09:54.500 [2024-12-06T13:03:43.140Z] ====================================== 00:09:54.500 [2024-12-06T13:03:43.140Z] busy:2401265294 (cyc) 00:09:54.500 [2024-12-06T13:03:43.140Z] total_run_count: 5100000 00:09:54.500 [2024-12-06T13:03:43.140Z] tsc_hz: 2400000000 (cyc) 00:09:54.500 [2024-12-06T13:03:43.140Z] ====================================== 00:09:54.500 [2024-12-06T13:03:43.140Z] poller_cost: 470 (cyc), 195 (nsec) 00:09:54.500 00:09:54.500 real 0m1.168s 00:09:54.500 user 0m1.086s 00:09:54.500 sys 0m0.079s 00:09:54.500 14:03:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.500 14:03:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:54.500 ************************************ 00:09:54.500 END TEST thread_poller_perf 00:09:54.500 ************************************ 00:09:54.500 14:03:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:54.500 00:09:54.500 real 0m2.704s 00:09:54.500 user 0m2.357s 00:09:54.500 sys 0m0.362s 00:09:54.500 14:03:43 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.500 14:03:43 thread -- common/autotest_common.sh@10 -- # set +x 00:09:54.500 ************************************ 00:09:54.500 END TEST thread 00:09:54.500 ************************************ 00:09:54.761 14:03:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:54.761 14:03:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:54.761 14:03:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.761 14:03:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.761 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:09:54.761 ************************************ 00:09:54.761 START TEST app_cmdline 00:09:54.761 ************************************ 00:09:54.761 14:03:43 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:54.761 * Looking for test storage... 
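The poller_cost lines printed by the two poller_perf runs are consistent with a simple average, busy cycles divided by total_run_count, converted to nanoseconds with the reported 2.4 GHz tsc_hz; the internals of poller_perf are not shown here, so treat this only as a plausibility check on the numbers above:

  # 1 us period: 2408028780 / 419000   -> 5747 cyc -> 2394 nsec
  # 0 us period: 2401265294 / 5100000  ->  470 cyc ->  195 nsec
  awk 'BEGIN { c = int(2408028780 / 419000);  printf "%d cyc, %d nsec\n", c, int(c / 2.4) }'
  awk 'BEGIN { c = int(2401265294 / 5100000); printf "%d cyc, %d nsec\n", c, int(c / 2.4) }'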
00:09:54.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:54.761 14:03:43 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.761 14:03:43 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.761 14:03:43 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.761 14:03:43 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.761 14:03:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.022 14:03:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.022 --rc genhtml_branch_coverage=1 00:09:55.022 --rc genhtml_function_coverage=1 00:09:55.022 --rc genhtml_legend=1 00:09:55.022 --rc geninfo_all_blocks=1 00:09:55.022 --rc geninfo_unexecuted_blocks=1 00:09:55.022 00:09:55.022 ' 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.022 --rc genhtml_branch_coverage=1 00:09:55.022 --rc genhtml_function_coverage=1 00:09:55.022 --rc genhtml_legend=1 00:09:55.022 --rc geninfo_all_blocks=1 00:09:55.022 --rc geninfo_unexecuted_blocks=1 
00:09:55.022 00:09:55.022 ' 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.022 --rc genhtml_branch_coverage=1 00:09:55.022 --rc genhtml_function_coverage=1 00:09:55.022 --rc genhtml_legend=1 00:09:55.022 --rc geninfo_all_blocks=1 00:09:55.022 --rc geninfo_unexecuted_blocks=1 00:09:55.022 00:09:55.022 ' 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.022 --rc genhtml_branch_coverage=1 00:09:55.022 --rc genhtml_function_coverage=1 00:09:55.022 --rc genhtml_legend=1 00:09:55.022 --rc geninfo_all_blocks=1 00:09:55.022 --rc geninfo_unexecuted_blocks=1 00:09:55.022 00:09:55.022 ' 00:09:55.022 14:03:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:55.022 14:03:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2604605 00:09:55.022 14:03:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2604605 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2604605 ']' 00:09:55.022 14:03:43 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.022 14:03:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:55.022 [2024-12-06 14:03:43.472404] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:09:55.022 [2024-12-06 14:03:43.472490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604605 ] 00:09:55.022 [2024-12-06 14:03:43.559106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.022 [2024-12-06 14:03:43.593075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:55.960 { 00:09:55.960 "version": "SPDK v25.01-pre git sha1 6696ebaae", 00:09:55.960 "fields": { 00:09:55.960 "major": 25, 00:09:55.960 "minor": 1, 00:09:55.960 "patch": 0, 00:09:55.960 "suffix": "-pre", 00:09:55.960 "commit": "6696ebaae" 00:09:55.960 } 00:09:55.960 } 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:55.960 14:03:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:55.960 14:03:44 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:56.220 request: 00:09:56.220 { 00:09:56.220 "method": "env_dpdk_get_mem_stats", 00:09:56.220 "req_id": 1 00:09:56.220 } 00:09:56.220 Got JSON-RPC error response 00:09:56.220 response: 00:09:56.220 { 00:09:56.220 "code": -32601, 00:09:56.220 "message": "Method not found" 00:09:56.220 } 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:56.220 14:03:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2604605 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2604605 ']' 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2604605 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2604605 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2604605' 00:09:56.220 killing process with pid 2604605 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@973 -- # kill 2604605 00:09:56.220 14:03:44 app_cmdline -- common/autotest_common.sh@978 -- # wait 2604605 00:09:56.484 00:09:56.484 real 0m1.706s 00:09:56.484 user 0m2.025s 00:09:56.484 sys 0m0.466s 00:09:56.484 14:03:44 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.484 14:03:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:56.484 ************************************ 00:09:56.484 END TEST app_cmdline 00:09:56.484 ************************************ 00:09:56.484 14:03:44 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:56.484 14:03:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.484 14:03:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.484 14:03:44 -- common/autotest_common.sh@10 -- # set +x 00:09:56.484 ************************************ 00:09:56.484 START TEST version 00:09:56.484 ************************************ 00:09:56.484 14:03:44 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:56.484 * Looking for test storage... 
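The 'Method not found' (-32601) above comes from the allow-list: this spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable. A hedged sketch of exercising that server with scripts/rpc.py, assuming the same repository path:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown above
  $SPDK/scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601 'Method not found'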
00:09:56.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:56.484 14:03:45 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:56.484 14:03:45 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:56.484 14:03:45 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:56.745 14:03:45 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:56.745 14:03:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.745 14:03:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.745 14:03:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.745 14:03:45 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.745 14:03:45 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.745 14:03:45 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.745 14:03:45 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.745 14:03:45 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.745 14:03:45 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.745 14:03:45 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.745 14:03:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.745 14:03:45 version -- scripts/common.sh@344 -- # case "$op" in 00:09:56.745 14:03:45 version -- scripts/common.sh@345 -- # : 1 00:09:56.745 14:03:45 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.745 14:03:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:56.745 14:03:45 version -- scripts/common.sh@365 -- # decimal 1 00:09:56.745 14:03:45 version -- scripts/common.sh@353 -- # local d=1 00:09:56.745 14:03:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.745 14:03:45 version -- scripts/common.sh@355 -- # echo 1 00:09:56.745 14:03:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.745 14:03:45 version -- scripts/common.sh@366 -- # decimal 2 00:09:56.745 14:03:45 version -- scripts/common.sh@353 -- # local d=2 00:09:56.745 14:03:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.745 14:03:45 version -- scripts/common.sh@355 -- # echo 2 00:09:56.745 14:03:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.745 14:03:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.745 14:03:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.745 14:03:45 version -- scripts/common.sh@368 -- # return 0 00:09:56.745 14:03:45 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.745 14:03:45 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:56.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.745 --rc genhtml_branch_coverage=1 00:09:56.745 --rc genhtml_function_coverage=1 00:09:56.745 --rc genhtml_legend=1 00:09:56.745 --rc geninfo_all_blocks=1 00:09:56.745 --rc geninfo_unexecuted_blocks=1 00:09:56.745 00:09:56.745 ' 00:09:56.745 14:03:45 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:56.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.745 --rc genhtml_branch_coverage=1 00:09:56.745 --rc genhtml_function_coverage=1 00:09:56.745 --rc genhtml_legend=1 00:09:56.745 --rc geninfo_all_blocks=1 00:09:56.745 --rc geninfo_unexecuted_blocks=1 00:09:56.745 00:09:56.745 ' 00:09:56.745 14:03:45 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:56.745 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.745 --rc genhtml_branch_coverage=1 00:09:56.745 --rc genhtml_function_coverage=1 00:09:56.745 --rc genhtml_legend=1 00:09:56.745 --rc geninfo_all_blocks=1 00:09:56.745 --rc geninfo_unexecuted_blocks=1 00:09:56.745 00:09:56.745 ' 00:09:56.745 14:03:45 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:56.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.745 --rc genhtml_branch_coverage=1 00:09:56.745 --rc genhtml_function_coverage=1 00:09:56.745 --rc genhtml_legend=1 00:09:56.745 --rc geninfo_all_blocks=1 00:09:56.745 --rc geninfo_unexecuted_blocks=1 00:09:56.745 00:09:56.745 ' 00:09:56.745 14:03:45 version -- app/version.sh@17 -- # get_header_version major 00:09:56.745 14:03:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:56.745 14:03:45 version -- app/version.sh@14 -- # cut -f2 00:09:56.745 14:03:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:56.745 14:03:45 version -- app/version.sh@17 -- # major=25 00:09:56.745 14:03:45 version -- app/version.sh@18 -- # get_header_version minor 00:09:56.745 14:03:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:56.745 14:03:45 version -- app/version.sh@14 -- # cut -f2 00:09:56.745 14:03:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:56.745 14:03:45 version -- app/version.sh@18 -- # minor=1 00:09:56.745 14:03:45 version -- app/version.sh@19 -- # get_header_version patch 00:09:56.745 14:03:45 version -- app/version.sh@14 -- # cut -f2 00:09:56.745 14:03:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:56.745 14:03:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:56.745 14:03:45 version -- app/version.sh@19 -- # patch=0 00:09:56.745 14:03:45 version -- app/version.sh@20 -- # get_header_version suffix 00:09:56.745 14:03:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:56.745 14:03:45 version -- app/version.sh@14 -- # cut -f2 00:09:56.745 14:03:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:56.745 14:03:45 version -- app/version.sh@20 -- # suffix=-pre 00:09:56.745 14:03:45 version -- app/version.sh@22 -- # version=25.1 00:09:56.745 14:03:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:56.745 14:03:45 version -- app/version.sh@28 -- # version=25.1rc0 00:09:56.745 14:03:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:56.745 14:03:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:56.745 14:03:45 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:56.745 14:03:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:56.745 00:09:56.745 real 0m0.290s 00:09:56.745 user 0m0.175s 00:09:56.745 sys 0m0.163s 00:09:56.745 14:03:45 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.745 
14:03:45 version -- common/autotest_common.sh@10 -- # set +x 00:09:56.745 ************************************ 00:09:56.745 END TEST version 00:09:56.745 ************************************ 00:09:56.745 14:03:45 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:56.745 14:03:45 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:56.745 14:03:45 -- spdk/autotest.sh@194 -- # uname -s 00:09:56.745 14:03:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:56.745 14:03:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:56.745 14:03:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:56.745 14:03:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:56.745 14:03:45 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:56.745 14:03:45 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:56.745 14:03:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.745 14:03:45 -- common/autotest_common.sh@10 -- # set +x 00:09:56.745 14:03:45 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:56.745 14:03:45 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:56.745 14:03:45 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:56.745 14:03:45 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:09:56.745 14:03:45 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:56.745 14:03:45 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:56.745 14:03:45 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:56.745 14:03:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.745 14:03:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.745 14:03:45 -- common/autotest_common.sh@10 -- # set +x 00:09:57.005 ************************************ 00:09:57.005 START TEST nvmf_tcp 00:09:57.005 ************************************ 00:09:57.005 14:03:45 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:57.005 * Looking for test storage... 
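version.sh above builds 25.1rc0 purely from include/spdk/version.h: each #define is grepped, the value cut out and the quotes stripped. A minimal sketch of that extraction, assuming the same tree layout; the mapping of the -pre suffix to rc0 is left to version.sh itself:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  hdr=$SPDK/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "major=$major minor=$minor suffix=$suffix"   # 25, 1, -pre, reported by the test as 25.1rc0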
00:09:57.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:57.005 14:03:45 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:57.005 14:03:45 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:57.005 14:03:45 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.005 14:03:45 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:57.005 14:03:45 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.006 14:03:45 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:57.006 14:03:45 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.006 14:03:45 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.006 --rc genhtml_branch_coverage=1 00:09:57.006 --rc genhtml_function_coverage=1 00:09:57.006 --rc genhtml_legend=1 00:09:57.006 --rc geninfo_all_blocks=1 00:09:57.006 --rc geninfo_unexecuted_blocks=1 00:09:57.006 00:09:57.006 ' 00:09:57.006 14:03:45 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.006 --rc genhtml_branch_coverage=1 00:09:57.006 --rc genhtml_function_coverage=1 00:09:57.006 --rc genhtml_legend=1 00:09:57.006 --rc geninfo_all_blocks=1 00:09:57.006 --rc geninfo_unexecuted_blocks=1 00:09:57.006 00:09:57.006 ' 00:09:57.006 14:03:45 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:09:57.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.006 --rc genhtml_branch_coverage=1 00:09:57.006 --rc genhtml_function_coverage=1 00:09:57.006 --rc genhtml_legend=1 00:09:57.006 --rc geninfo_all_blocks=1 00:09:57.006 --rc geninfo_unexecuted_blocks=1 00:09:57.006 00:09:57.006 ' 00:09:57.006 14:03:45 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.006 --rc genhtml_branch_coverage=1 00:09:57.006 --rc genhtml_function_coverage=1 00:09:57.006 --rc genhtml_legend=1 00:09:57.006 --rc geninfo_all_blocks=1 00:09:57.006 --rc geninfo_unexecuted_blocks=1 00:09:57.006 00:09:57.006 ' 00:09:57.006 14:03:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:57.006 14:03:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:57.006 14:03:45 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:57.006 14:03:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.006 14:03:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.006 14:03:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.266 ************************************ 00:09:57.266 START TEST nvmf_target_core 00:09:57.266 ************************************ 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:57.266 * Looking for test storage... 00:09:57.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.266 --rc genhtml_branch_coverage=1 00:09:57.266 --rc genhtml_function_coverage=1 00:09:57.266 --rc genhtml_legend=1 00:09:57.266 --rc geninfo_all_blocks=1 00:09:57.266 --rc geninfo_unexecuted_blocks=1 00:09:57.266 00:09:57.266 ' 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.266 --rc genhtml_branch_coverage=1 00:09:57.266 --rc genhtml_function_coverage=1 00:09:57.266 --rc genhtml_legend=1 00:09:57.266 --rc geninfo_all_blocks=1 00:09:57.266 --rc geninfo_unexecuted_blocks=1 00:09:57.266 00:09:57.266 ' 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:57.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.266 --rc genhtml_branch_coverage=1 00:09:57.266 --rc genhtml_function_coverage=1 00:09:57.266 --rc genhtml_legend=1 00:09:57.266 --rc geninfo_all_blocks=1 00:09:57.266 --rc geninfo_unexecuted_blocks=1 00:09:57.266 00:09:57.266 ' 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.266 --rc genhtml_branch_coverage=1 00:09:57.266 --rc genhtml_function_coverage=1 00:09:57.266 --rc genhtml_legend=1 00:09:57.266 --rc geninfo_all_blocks=1 00:09:57.266 --rc geninfo_unexecuted_blocks=1 00:09:57.266 00:09:57.266 ' 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.266 14:03:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.267 14:03:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.528 
************************************ 00:09:57.528 START TEST nvmf_abort 00:09:57.528 ************************************ 00:09:57.528 14:03:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:57.528 * Looking for test storage... 00:09:57.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.528 --rc genhtml_branch_coverage=1 00:09:57.528 --rc genhtml_function_coverage=1 00:09:57.528 --rc genhtml_legend=1 00:09:57.528 --rc geninfo_all_blocks=1 00:09:57.528 --rc geninfo_unexecuted_blocks=1 00:09:57.528 00:09:57.528 ' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.528 --rc genhtml_branch_coverage=1 00:09:57.528 --rc genhtml_function_coverage=1 00:09:57.528 --rc genhtml_legend=1 00:09:57.528 --rc geninfo_all_blocks=1 00:09:57.528 --rc geninfo_unexecuted_blocks=1 00:09:57.528 00:09:57.528 ' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:57.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.528 --rc genhtml_branch_coverage=1 00:09:57.528 --rc genhtml_function_coverage=1 00:09:57.528 --rc genhtml_legend=1 00:09:57.528 --rc geninfo_all_blocks=1 00:09:57.528 --rc geninfo_unexecuted_blocks=1 00:09:57.528 00:09:57.528 ' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.528 --rc genhtml_branch_coverage=1 00:09:57.528 --rc genhtml_function_coverage=1 00:09:57.528 --rc genhtml_legend=1 00:09:57.528 --rc geninfo_all_blocks=1 00:09:57.528 --rc geninfo_unexecuted_blocks=1 00:09:57.528 00:09:57.528 ' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
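The nvmftestinit call above is where the harness prepares the two detected e810 ports for the NVMe/TCP run, and the entries that follow trace it step by step: both ports are flushed, the target-side port is moved into a private network namespace, the two sides get 10.0.0.2/10.0.0.1 addresses, the firewall is opened for port 4420, connectivity is verified with ping in both directions, and nvme-tcp is loaded. The condensed sketch below is only a hand-written summary of those steps; the interface names cvl_0_0/cvl_0_1 and the namespace cvl_0_0_ns_spdk are what this particular run detected, and the TGT_IF/INI_IF/NS variables are ours, not the harness's. (The "[: : integer expression expected" message from nvmf/common.sh line 33 a little earlier is the shell objecting to [ '' -eq 1 ] on an unset flag; the test simply evaluates false and the run continues.)

  # Condensed from the nvmftestinit trace below; names taken from this run, run as root.
  TGT_IF=cvl_0_0            # port that will live inside the namespace (target side)
  INI_IF=cvl_0_1            # port left in the default namespace (initiator side)
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                          # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
  modprobe nvme-tcp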
00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.528 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.788 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:57.788 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:57.788 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.788 14:03:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.920 14:03:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.920 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:05.921 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:05.921 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:05.921 14:03:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:05.921 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:05.921 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.921 14:03:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:05.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:10:05.921 00:10:05.921 --- 10.0.0.2 ping statistics --- 00:10:05.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.921 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:05.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:05.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:10:05.921 00:10:05.921 --- 10.0.0.1 ping statistics --- 00:10:05.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.921 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2608960 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2608960 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2608960 ']' 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.921 14:03:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:05.921 [2024-12-06 14:03:53.725194] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
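At this point nvmfappstart -m 0xE has launched the target application inside that namespace and is blocking in waitforlisten until the app answers on its RPC socket (/var/tmp/spdk.sock, per the "Waiting for process..." message above). A rough stand-alone equivalent is sketched here; it is simplified from what the harness actually does, and spdk_get_version is just one cheap RPC method to poll with.

  # Simplified stand-in for nvmfappstart + waitforlisten (not the harness's exact logic).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
    sleep 0.5
  done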
00:10:05.922 [2024-12-06 14:03:53.725263] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.922 [2024-12-06 14:03:53.827055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:05.922 [2024-12-06 14:03:53.882932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.922 [2024-12-06 14:03:53.882985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.922 [2024-12-06 14:03:53.882993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.922 [2024-12-06 14:03:53.883000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.922 [2024-12-06 14:03:53.883007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.922 [2024-12-06 14:03:53.885103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.922 [2024-12-06 14:03:53.885264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.922 [2024-12-06 14:03:53.885266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.922 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.922 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:10:05.922 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:05.922 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.922 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.183 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.183 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:06.183 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.183 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.183 [2024-12-06 14:03:54.602794] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.183 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.183 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.184 Malloc0 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.184 Delay0 
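With the target up, abort.sh configures it entirely over RPC: the entries just above create the TCP transport, a Malloc0 bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=4096) and a Delay0 bdev layered on top of it with large artificial read/write latencies, and the entries that follow add subsystem nqn.2016-06.io.spdk:cnode0, attach Delay0 as its namespace, open the TCP listeners, and launch the abort example against them. rpc_cmd is the harness's RPC wrapper; driven by hand with scripts/rpc.py (default socket assumed), the same sequence would look roughly like this, with every argument copied from this run:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc bdev_malloc_create 64 4096 -b Malloc0
  rpc bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000     # delay values as used by this run
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # The slow Delay0 namespace keeps I/O outstanding so the abort example has work to cancel:
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128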
00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.184 [2024-12-06 14:03:54.684800] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.184 14:03:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:06.447 [2024-12-06 14:03:54.876582] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:08.359 Initializing NVMe Controllers 00:10:08.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:08.359 controller IO queue size 128 less than required 00:10:08.359 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:08.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:08.359 Initialization complete. Launching workers. 
00:10:08.359 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28311 00:10:08.359 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28372, failed to submit 62 00:10:08.359 success 28315, unsuccessful 57, failed 0 00:10:08.359 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:08.359 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.359 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:08.359 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.359 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:08.359 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:08.359 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.359 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:10:08.359 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.359 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:10:08.360 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.360 14:03:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.360 rmmod nvme_tcp 00:10:08.621 rmmod nvme_fabrics 00:10:08.621 rmmod nvme_keyring 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2608960 ']' 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2608960 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2608960 ']' 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2608960 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2608960 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2608960' 00:10:08.621 killing process with pid 2608960 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2608960 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2608960 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.621 14:03:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.621 14:03:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.188 00:10:11.188 real 0m13.392s 00:10:11.188 user 0m14.040s 00:10:11.188 sys 0m6.625s 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:11.188 ************************************ 00:10:11.188 END TEST nvmf_abort 00:10:11.188 ************************************ 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.188 ************************************ 00:10:11.188 START TEST nvmf_ns_hotplug_stress 00:10:11.188 ************************************ 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:11.188 * Looking for test storage... 
00:10:11.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.188 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.189 --rc genhtml_branch_coverage=1 00:10:11.189 --rc genhtml_function_coverage=1 00:10:11.189 --rc genhtml_legend=1 00:10:11.189 --rc geninfo_all_blocks=1 00:10:11.189 --rc geninfo_unexecuted_blocks=1 00:10:11.189 00:10:11.189 ' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.189 --rc genhtml_branch_coverage=1 00:10:11.189 --rc genhtml_function_coverage=1 00:10:11.189 --rc genhtml_legend=1 00:10:11.189 --rc geninfo_all_blocks=1 00:10:11.189 --rc geninfo_unexecuted_blocks=1 00:10:11.189 00:10:11.189 ' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.189 --rc genhtml_branch_coverage=1 00:10:11.189 --rc genhtml_function_coverage=1 00:10:11.189 --rc genhtml_legend=1 00:10:11.189 --rc geninfo_all_blocks=1 00:10:11.189 --rc geninfo_unexecuted_blocks=1 00:10:11.189 00:10:11.189 ' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.189 --rc genhtml_branch_coverage=1 00:10:11.189 --rc genhtml_function_coverage=1 00:10:11.189 --rc genhtml_legend=1 00:10:11.189 --rc geninfo_all_blocks=1 00:10:11.189 --rc geninfo_unexecuted_blocks=1 00:10:11.189 00:10:11.189 ' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.189 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.190 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.190 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.190 14:03:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.331 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:19.332 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.332 
14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:19.332 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:19.332 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:19.332 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:19.332 14:04:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:19.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:10:19.332 00:10:19.332 --- 10.0.0.2 ping statistics --- 00:10:19.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.332 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:10:19.332 00:10:19.332 --- 10.0.0.1 ping statistics --- 00:10:19.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.332 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:19.332 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2613875 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2613875 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2613875 ']' 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.333 14:04:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.333 [2024-12-06 14:04:07.192493] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:10:19.333 [2024-12-06 14:04:07.192561] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.333 [2024-12-06 14:04:07.292547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:19.333 [2024-12-06 14:04:07.345364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.333 [2024-12-06 14:04:07.345415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.333 [2024-12-06 14:04:07.345425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.333 [2024-12-06 14:04:07.345433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.333 [2024-12-06 14:04:07.345439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
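
At this point nvmf_tgt is coming up inside the cvl_0_0_ns_spdk namespace (the target interface cvl_0_0 holds 10.0.0.2/24 and the initiator interface cvl_0_1 holds 10.0.0.1/24, as configured in the entries above). The remainder of this trace configures the subsystem over RPC, runs spdk_nvme_perf against it, and then repeatedly removes and re-adds namespace 1 while growing NULL1. As a reading aid, here is a condensed sketch of that command sequence, reconstructed from the traced ns_hotplug_stress.sh lines below; the backgrounding of perf and the exact while-loop form are assumptions, and the traced commands themselves remain authoritative:

  # Sketch only: condensed from the trace that follows, not copied from the script.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Target configuration (ns_hotplug_stress.sh lines 27-36 as traced below)
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc_py bdev_null_create NULL1 1000 512
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # 30 s of queued random reads against the subsystem (PERF_PID is 2614574 in this run);
  # backgrounding with & is an assumption for this sketch
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  # Hotplug stress: while perf is still alive, detach and re-attach namespace 1
  # and grow NULL1 by one block per pass (null_size 1001, 1002, ... in the trace)
  null_size=1000
  while kill -0 "$PERF_PID"; do
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"
  done

Each pass of that loop corresponds to one remove_ns/add_ns/bdev_null_resize triple in the trace below, with null_size stepping from 1001 upward until the perf process exits.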
00:10:19.333 [2024-12-06 14:04:07.347301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.333 [2024-12-06 14:04:07.347463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.333 [2024-12-06 14:04:07.347477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.594 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.594 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:10:19.594 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.594 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.595 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.595 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.595 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:19.595 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:19.856 [2024-12-06 14:04:08.233433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.856 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:19.856 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.116 [2024-12-06 14:04:08.632368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.116 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:20.377 14:04:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:20.638 Malloc0 00:10:20.638 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:20.638 Delay0 00:10:20.638 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.900 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:21.161 NULL1 00:10:21.161 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:21.422 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2614574 00:10:21.423 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:21.423 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:21.423 14:04:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.423 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.683 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:21.683 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:21.944 true 00:10:21.944 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:21.944 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.944 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.206 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:22.206 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:22.467 true 00:10:22.468 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:22.468 14:04:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.729 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.729 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:22.729 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:22.991 true 00:10:22.991 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:22.991 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.252 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.252 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:23.252 14:04:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:23.513 true 00:10:23.513 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:23.513 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.773 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.773 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:23.773 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:24.034 true 00:10:24.034 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:24.034 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.293 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.293 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:24.293 14:04:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:24.553 true 00:10:24.553 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:24.553 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.812 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.812 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:24.812 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:25.071 true 00:10:25.071 14:04:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:25.071 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.332 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.332 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:25.332 14:04:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:25.655 true 00:10:25.655 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:25.655 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.932 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.932 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:25.932 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:26.192 true 00:10:26.192 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:26.192 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.452 14:04:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.452 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:26.452 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:26.711 true 00:10:26.711 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:26.711 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.969 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.969 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:26.969 14:04:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:27.227 true 00:10:27.227 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:27.227 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.486 14:04:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.486 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:27.486 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:27.746 true 00:10:27.746 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:27.746 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.007 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.267 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:28.267 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:28.267 true 00:10:28.267 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:28.267 14:04:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.528 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.788 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:28.788 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:28.788 true 00:10:29.049 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:29.049 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.049 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.309 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:29.309 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:29.309 true 00:10:29.570 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:29.570 14:04:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.570 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.830 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:29.830 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:30.089 true 00:10:30.089 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:30.089 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.089 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.349 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:30.349 14:04:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:30.610 true 00:10:30.610 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:30.610 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.963 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.963 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:30.963 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:30.963 true 00:10:31.224 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:31.224 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.224 14:04:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.485 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:31.485 14:04:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:31.746 true 00:10:31.746 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:31.746 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.746 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.007 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:32.007 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:32.268 true 00:10:32.268 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:32.268 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.268 14:04:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.530 14:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:32.530 14:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:32.792 true 00:10:32.792 14:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:32.792 14:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.052 14:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.052 14:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:33.052 14:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:33.313 true 00:10:33.313 14:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:33.313 14:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.596 14:04:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.596 14:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:33.596 14:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:33.858 true 00:10:33.858 14:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:33.858 14:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.119 14:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.119 14:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:34.119 14:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:34.379 true 00:10:34.379 14:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:34.379 14:04:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.641 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.641 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:34.641 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:34.902 true 00:10:34.902 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:34.902 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.162 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.162 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:35.162 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:35.423 true 00:10:35.423 14:04:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:35.423 14:04:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.684 14:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.945 14:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:35.945 14:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:35.945 true 00:10:35.945 14:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:35.945 14:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.207 14:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.467 14:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:36.467 14:04:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:36.467 true 00:10:36.467 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:36.467 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.728 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.988 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:36.988 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:36.988 true 00:10:36.988 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:36.988 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.248 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.508 14:04:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:37.508 14:04:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:37.508 true 00:10:37.767 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:37.767 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.767 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.075 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:38.075 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:38.075 true 00:10:38.075 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:38.075 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.334 14:04:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.593 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:38.593 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:38.593 true 00:10:38.854 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:38.854 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.854 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.113 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:39.113 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:39.373 true 00:10:39.373 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:39.373 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.373 14:04:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.633 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:39.633 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:39.892 true 00:10:39.892 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:39.892 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.151 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.151 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:40.151 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:40.410 true 00:10:40.410 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:40.410 14:04:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.670 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.670 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:40.670 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:40.931 true 00:10:40.931 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:40.931 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.192 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.453 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:41.453 14:04:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:41.453 true 00:10:41.453 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:41.453 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.714 14:04:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.974 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:41.974 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:41.974 true 00:10:41.974 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:41.974 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.235 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.497 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:42.497 14:04:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:42.497 true 00:10:42.758 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:42.758 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.758 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.019 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:43.019 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:43.280 true 00:10:43.280 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:43.280 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.280 14:04:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.540 14:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:43.540 14:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:43.800 true 00:10:43.800 14:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:43.800 14:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.061 14:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.061 14:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:44.061 14:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:44.322 true 00:10:44.322 14:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:44.322 14:04:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.582 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.582 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:44.582 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:44.843 true 00:10:44.843 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:44.843 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.103 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.364 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:45.364 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:45.364 true 00:10:45.364 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:45.364 14:04:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.624 14:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.884 14:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:45.884 14:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:45.884 true 00:10:45.884 14:04:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:45.884 14:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.145 14:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.406 14:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:46.406 14:04:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:46.666 true 00:10:46.666 14:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:46.666 14:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.666 14:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.926 14:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:46.926 14:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:47.186 true 00:10:47.186 14:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:47.186 14:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.186 14:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.446 14:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:47.446 14:04:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:47.706 true 00:10:47.706 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:47.706 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.966 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.966 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:47.966 14:04:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:48.225 true 00:10:48.225 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:48.225 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.484 14:04:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.484 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:48.484 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:48.743 true 00:10:48.743 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:48.743 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.003 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.262 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:49.262 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:49.262 true 00:10:49.262 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:49.262 14:04:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.521 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.780 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:49.780 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:49.780 true 00:10:49.780 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:49.780 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.039 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.297 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:50.297 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:50.556 true 00:10:50.556 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:50.556 14:04:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.556 14:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.815 14:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:50.815 14:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:51.075 true 00:10:51.075 14:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:51.075 14:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.075 14:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.334 14:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:51.334 14:04:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:51.594 true 00:10:51.594 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574 00:10:51.594 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.594 Initializing NVMe Controllers 00:10:51.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.594 Controller IO queue size 128, less than required. 00:10:51.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:51.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:51.594 Initialization complete. Launching workers. 
00:10:51.594 ========================================================
00:10:51.594 Latency(us)
00:10:51.594 Device Information : IOPS MiB/s Average min max
00:10:51.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31207.77 15.24 4101.35 1100.20 11118.52
00:10:51.594 ========================================================
00:10:51.594 Total : 31207.77 15.24 4101.35 1100.20 11118.52
00:10:51.594
00:10:51.854 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:51.854 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:10:51.854 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:10:52.114 true
00:10:52.114 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2614574
00:10:52.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2614574) - No such process
00:10:52.114 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2614574
00:10:52.114 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:52.374 14:04:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:52.374 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:52.374 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:52.374 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:52.374 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:52.634 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:52.634 null0
00:10:52.634 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:52.634 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:52.634 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:52.894 null1
00:10:52.894 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:52.894 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:52.894 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:52.894 null2
00:10:53.155 14:04:41
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.155 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.155 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:53.155 null3 00:10:53.155 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.155 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.155 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:53.415 null4 00:10:53.415 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.415 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.415 14:04:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:53.675 null5 00:10:53.676 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.676 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.676 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:53.676 null6 00:10:53.676 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.676 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.676 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:53.977 null7 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
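The loop traced above (ns_hotplug_stress.sh lines 44-50, the null_size=1023..1056 run) is the single-namespace hot-plug stress: while the background perf process (PID 2614574 in this run) is still alive, namespace 1 is removed from nqn.2016-06.io.spdk:cnode1, the Delay0 bdev is added back, and the NULL1 null bdev is resized one step larger; judging by the perf summary, the I/O load runs against NSID 2 for the whole loop. A minimal bash sketch of that pattern, not the script verbatim: the rpc.py path, NQN, and bdev names come from the log, while the rpc shorthand and PERF_PID are assumed placeholders.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1023                                                    # first value visible in this part of the trace
while kill -0 "$PERF_PID"; do                                     # keep churning while perf is still running
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-unplug NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach the Delay0 bdev
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"                      # grow NULL1 by one size step each pass
done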
00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:53.977 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
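The entries here and just below are the concurrent phase (script lines 58-66 plus the add_remove helper traced at lines 14-18): eight null bdevs null0..null7 are created and eight background workers are launched, each adding and removing its own namespace ID against nqn.2016-06.io.spdk:cnode1 ten times, before the script waits on all of them (the 'wait 2621136 2621137 ...' entry below). A hedged bash sketch of that structure, reusing the rpc.py path, NQN, and bdev geometry from the log; the rpc and subsys shorthands are assumptions, the other names follow the trace.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {                                  # one worker: churn a single NSID ten times
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $rpc nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        $rpc nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    $rpc bdev_null_create "null$i" 100 4096     # 100 MB null bdev with 4096-byte blocks
    add_remove "$((i + 1))" "null$i" &          # NSIDs 1..8 map onto null0..null7
    pids+=($!)
done
wait "${pids[@]}"

Each worker owns a distinct NSID, so the eight loops never race on the same namespace; the stress comes from the subsystem seeing continuous attach/detach traffic from all of them at once.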
00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2621136 2621137 2621139 2621141 2621143 2621145 2621147 2621149 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:53.978 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.239 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:54.499 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.499 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.499 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:54.499 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.499 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.499 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:54.500 14:04:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:54.500 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.500 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:54.500 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:54.500 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.500 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:54.761 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.021 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.021 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.021 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.021 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:55.021 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.021 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.021 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.021 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:55.021 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:55.022 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.022 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.022 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:55.022 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.022 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.022 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.022 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.022 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.022 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.282 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:55.545 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.545 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.545 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.545 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.545 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.545 14:04:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:55.545 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.806 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:55.806 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:55.806 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.806 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.806 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:55.806 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.806 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.806 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.806 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.806 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.807 14:04:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.807 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.067 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.068 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.068 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.068 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.068 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.068 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.068 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.068 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.068 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.068 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.328 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.589 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.590 14:04:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.590 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.851 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.112 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.372 14:04:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.632 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
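For context on the churn above and below: each nvmf_subsystem_add_ns call attaches one of the null bdevs null0 through null7 to the subsystem nqn.2016-06.io.spdk:cnode1 under a namespace ID from 1 to 8, and each nvmf_subsystem_remove_ns detaches a namespace again by ID. The setup that created the target, the subsystem and the null bdevs ran before this excerpt; a hypothetical equivalent built from standard SPDK RPCs (the serial number, listener address, bdev size and block size below are illustrative assumptions, not values taken from this log) would look roughly like:

  # Hypothetical target setup (not captured in this log); assumes nvmf_tgt is already running.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 0 7); do
      $rpc bdev_null_create "null$i" 100 4096    # 100 MB null bdevs that the stress loop hot-plugs as namespaces
  done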
00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.892 rmmod nvme_tcp 00:10:57.892 rmmod nvme_fabrics 00:10:57.892 rmmod nvme_keyring 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.892 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2613875 ']' 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2613875 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2613875 ']' 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2613875 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2613875 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2613875' 00:10:57.893 killing process with pid 2613875 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2613875 00:10:57.893 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2613875 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
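The ns_hotplug_stress.sh@16, @17 and @18 entries that dominate this part of the log are iterations of a small hot-plug loop: line 16 is the loop header (the repeated (( ++i )) / (( i < 10 )) pairs), line 17 hot-adds a namespace backed by one of the null bdevs, and line 18 hot-removes a namespace. A minimal sketch of that pattern, with the namespace choice written as random purely for illustration (this is a simplification of the traced behaviour, not the verbatim script):

  # Sketch of the hot-plug stress pattern visible in the trace (simplified; failures ignored).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for ((i = 0; i < 10; ++i)); do                                            # ns_hotplug_stress.sh@16
      n=$((RANDOM % 8 + 1))                                                 # namespace IDs 1-8 map onto null0-null7
      $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true    # @17: hot-add a namespace
      $rpc nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" || true    # @18: hot-remove a namespace
  done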
00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.153 14:04:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.064 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.064 00:11:00.064 real 0m49.289s 00:11:00.064 user 3m20.813s 00:11:00.064 sys 0m17.714s 00:11:00.064 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.064 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.064 ************************************ 00:11:00.064 END TEST nvmf_ns_hotplug_stress 00:11:00.065 ************************************ 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.324 ************************************ 00:11:00.324 START TEST nvmf_delete_subsystem 00:11:00.324 ************************************ 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:00.324 * Looking for test storage... 
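The nvmftestfini teardown traced just before the END TEST banner reduces to a few concrete steps, all of which appear in the xtrace: unload the host-side NVMe/TCP modules, kill the SPDK target process (PID 2613875, reported by ps as reactor_1) and wait for it, restore the iptables rules with the SPDK_NVMF entries filtered out, and flush the test IPv4 address from the second port. Condensed from the traced commands (a paraphrase, not the verbatim common.sh helpers):

  # nvmftestfini, condensed from the trace above (retry loop and error handling omitted).
  modprobe -v -r nvme-tcp        # the rmmod output shows this also drops nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 2613875 && wait 2613875   # killprocess: stop the SPDK app started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: keep everything except the SPDK_NVMF rules
  ip -4 addr flush cvl_0_1       # drop the test address from the second e810 port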
00:11:00.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:11:00.324 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:11:00.583 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.583 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:11:00.583 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.583 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.583 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.583 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:11:00.583 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.583 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:00.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.583 --rc genhtml_branch_coverage=1 00:11:00.583 --rc genhtml_function_coverage=1 00:11:00.583 --rc genhtml_legend=1 00:11:00.583 --rc geninfo_all_blocks=1 00:11:00.583 --rc geninfo_unexecuted_blocks=1 00:11:00.583 00:11:00.583 ' 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:00.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.584 --rc genhtml_branch_coverage=1 00:11:00.584 --rc genhtml_function_coverage=1 00:11:00.584 --rc genhtml_legend=1 00:11:00.584 --rc geninfo_all_blocks=1 00:11:00.584 --rc geninfo_unexecuted_blocks=1 00:11:00.584 00:11:00.584 ' 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:00.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.584 --rc genhtml_branch_coverage=1 00:11:00.584 --rc genhtml_function_coverage=1 00:11:00.584 --rc genhtml_legend=1 00:11:00.584 --rc geninfo_all_blocks=1 00:11:00.584 --rc geninfo_unexecuted_blocks=1 00:11:00.584 00:11:00.584 ' 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:00.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.584 --rc genhtml_branch_coverage=1 00:11:00.584 --rc genhtml_function_coverage=1 00:11:00.584 --rc genhtml_legend=1 00:11:00.584 --rc geninfo_all_blocks=1 00:11:00.584 --rc geninfo_unexecuted_blocks=1 00:11:00.584 00:11:00.584 ' 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.584 14:04:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.584 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.584 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.584 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.584 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.584 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.584 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.584 14:04:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:11:08.716 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:08.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.717 
14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:08.717 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:08.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:08.717 Found net devices under 0000:4b:00.1: cvl_0_1 
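The block above is the NIC discovery step of nvmftestinit: common.sh walks the PCI bus, keeps the functions whose vendor:device pair matches the Intel E810 family (0x8086:0x1592 / 0x8086:0x159b here), and resolves each one to its kernel interface via /sys/bus/pci/devices/$pci/net, which is where the two "Found net devices under ..." cvl_0_* lines above come from. The earlier "[: : integer expression expected" from common.sh line 33 is bash refusing a numeric test on an empty expansion ('[' '' -eq 1 ']'); defaulting the flag, e.g. [ "${SOME_FLAG:-0}" -eq 1 ] with SOME_FLAG standing in for whichever variable is unset there, would avoid the noise. A minimal, self-contained sketch of the discovery logic — the paths and device IDs are taken from the trace, everything else is illustrative and not the actual common.sh:

    # enumerate PCI functions and report the net devices behind Intel E810 NICs
    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
      if [ "$vendor" = "$intel" ] && { [ "$device" = "0x1592" ] || [ "$device" = "0x159b" ]; }; then
        for net in "$pci"/net/*; do
          [ -e "$net" ] && echo "Found ${pci##*/} -> ${net##*/}"
        done
      fi
    done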
00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:08.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:11:08.717 00:11:08.717 --- 10.0.0.2 ping statistics --- 00:11:08.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.717 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:11:08.717 00:11:08.717 --- 10.0.0.1 ping statistics --- 00:11:08.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.717 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.717 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2626324 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2626324 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2626324 ']' 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.718 14:04:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.718 14:04:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.718 [2024-12-06 14:04:56.565547] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:11:08.718 [2024-12-06 14:04:56.565610] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.718 [2024-12-06 14:04:56.665705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:08.718 [2024-12-06 14:04:56.718645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.718 [2024-12-06 14:04:56.718701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.718 [2024-12-06 14:04:56.718709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.718 [2024-12-06 14:04:56.718717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.718 [2024-12-06 14:04:56.718723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.718 [2024-12-06 14:04:56.720398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.718 [2024-12-06 14:04:56.720401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.977 [2024-12-06 14:04:57.445689] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:08.977 14:04:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.977 [2024-12-06 14:04:57.470040] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.977 NULL1 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.977 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:08.978 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.978 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.978 Delay0 00:11:08.978 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.978 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.978 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.978 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.978 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.978 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2626669 00:11:08.978 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:08.978 14:04:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:08.978 [2024-12-06 14:04:57.596990] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
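At this point the target side is fully assembled: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace on cores 0-1, the TCP transport and nqn.2016-06.io.spdk:cnode1 have been created over RPC, a 1000 MB null bdev has been wrapped in a delay bdev (Delay0) that injects roughly a second of latency per I/O (the 1000000 µs read/write latency arguments), and spdk_nvme_perf has been launched against 10.0.0.2:4420 at queue depth 128. A condensed replay of that sequence, assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock:

    rpc=scripts/rpc.py                      # assumed resolution of rpc_cmd in the trace
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # 1000 MB backing bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!                             # saved so the script can poll it later

The slow Delay0 namespace is what makes the case meaningful: with -q 128 and around a second per command, plenty of I/O is still queued when nvmf_delete_subsystem fires at delete_subsystem.sh@32, and those aborted commands surface as the "completed with error (sct=0, sc=8)" stream that follows.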
00:11:10.900 14:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.900 14:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.900 14:04:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 [2024-12-06 14:04:59.722324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d92c0 is same with the state(6) to be set 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, 
sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 [2024-12-06 14:04:59.723616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d9680 is same with the state(6) to be set 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, 
sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 Write completed with error (sct=0, sc=8) 00:11:11.161 starting I/O failed: -6 00:11:11.161 Read completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 starting I/O failed: -6 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 starting I/O failed: -6 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 [2024-12-06 14:04:59.727956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe61800d4b0 is same with the state(6) to be set 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 
00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Write completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:11.162 Read completed with error (sct=0, sc=8) 00:11:12.101 [2024-12-06 14:05:00.696773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17da9b0 is same with the state(6) to be set 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 [2024-12-06 14:05:00.726253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d94a0 is same with the state(6) to be set 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 [2024-12-06 14:05:00.726361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d9860 is same with the state(6) to be set 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 
00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Write completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.101 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 [2024-12-06 14:05:00.730542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe61800d7e0 is same with the state(6) to be set 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Write completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 Read completed with error (sct=0, sc=8) 00:11:12.102 [2024-12-06 14:05:00.730632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe61800d040 is same with the state(6) to be set 00:11:12.102 Initializing NVMe Controllers 00:11:12.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:12.102 Controller IO queue size 128, less than required. 00:11:12.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:11:12.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:12.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:12.102 Initialization complete. Launching workers. 00:11:12.102 ======================================================== 00:11:12.102 Latency(us) 00:11:12.102 Device Information : IOPS MiB/s Average min max 00:11:12.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.34 0.08 921883.34 576.87 1006975.51 00:11:12.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.82 0.08 947570.65 289.41 2003187.54 00:11:12.102 ======================================================== 00:11:12.102 Total : 323.16 0.16 934984.26 289.41 2003187.54 00:11:12.102 00:11:12.102 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.102 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:12.102 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2626669 00:11:12.102 14:05:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:12.102 [2024-12-06 14:05:00.732127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17da9b0 (9): Bad file descriptor 00:11:12.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2626669 00:11:12.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2626669) - No such process 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2626669 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2626669 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2626669 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.672 [2024-12-06 14:05:01.260447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2627353 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627353 00:11:12.672 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:12.931 [2024-12-06 14:05:01.369070] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
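The second round re-creates cnode1, re-attaches Delay0, and starts a shorter (-t 3) perf run; what follows in the trace is the script polling the initiator with kill -0 every half second (delete_subsystem.sh@57/@58), bounded by the (( delay++ > 20 )) counter at @60, until perf exits and the stale-pid kill prints "No such process". The pattern, paraphrased rather than copied from the script:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # initiator still running?
      (( delay++ > 20 )) && exit 1               # ~10 s budget before giving up
      sleep 0.5
    done
    wait "$perf_pid" || true                     # reap the job; the script's own error handling differs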
00:11:13.190 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:13.190 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627353 00:11:13.190 14:05:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:13.760 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:13.760 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627353 00:11:13.760 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:14.329 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:14.329 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627353 00:11:14.329 14:05:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:14.901 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:14.901 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627353 00:11:14.901 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:15.476 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:15.476 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627353 00:11:15.476 14:05:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:15.736 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:15.736 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627353 00:11:15.736 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:15.996 Initializing NVMe Controllers 00:11:15.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:15.996 Controller IO queue size 128, less than required. 00:11:15.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:15.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:15.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:15.996 Initialization complete. Launching workers. 
00:11:15.996 ======================================================== 00:11:15.996 Latency(us) 00:11:15.996 Device Information : IOPS MiB/s Average min max 00:11:15.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002232.35 1000128.21 1042033.08 00:11:15.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004195.10 1000450.90 1043452.15 00:11:15.996 ======================================================== 00:11:15.996 Total : 256.00 0.12 1003213.73 1000128.21 1043452.15 00:11:15.996 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627353 00:11:16.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2627353) - No such process 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2627353 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.257 rmmod nvme_tcp 00:11:16.257 rmmod nvme_fabrics 00:11:16.257 rmmod nvme_keyring 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2626324 ']' 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2626324 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2626324 ']' 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2626324 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.257 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2626324 00:11:16.518 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.518 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:11:16.518 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2626324' 00:11:16.518 killing process with pid 2626324 00:11:16.518 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2626324 00:11:16.518 14:05:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2626324 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.518 14:05:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.058 00:11:19.058 real 0m18.370s 00:11:19.058 user 0m30.994s 00:11:19.058 sys 0m6.799s 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.058 ************************************ 00:11:19.058 END TEST nvmf_delete_subsystem 00:11:19.058 ************************************ 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:19.058 ************************************ 00:11:19.058 START TEST nvmf_host_management 00:11:19.058 ************************************ 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:19.058 * Looking for test storage... 
00:11:19.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.058 --rc genhtml_branch_coverage=1 00:11:19.058 --rc genhtml_function_coverage=1 00:11:19.058 --rc genhtml_legend=1 00:11:19.058 --rc geninfo_all_blocks=1 00:11:19.058 --rc geninfo_unexecuted_blocks=1 00:11:19.058 00:11:19.058 ' 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.058 --rc genhtml_branch_coverage=1 00:11:19.058 --rc genhtml_function_coverage=1 00:11:19.058 --rc genhtml_legend=1 00:11:19.058 --rc geninfo_all_blocks=1 00:11:19.058 --rc geninfo_unexecuted_blocks=1 00:11:19.058 00:11:19.058 ' 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.058 --rc genhtml_branch_coverage=1 00:11:19.058 --rc genhtml_function_coverage=1 00:11:19.058 --rc genhtml_legend=1 00:11:19.058 --rc geninfo_all_blocks=1 00:11:19.058 --rc geninfo_unexecuted_blocks=1 00:11:19.058 00:11:19.058 ' 00:11:19.058 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.058 --rc genhtml_branch_coverage=1 00:11:19.058 --rc genhtml_function_coverage=1 00:11:19.058 --rc genhtml_legend=1 00:11:19.058 --rc geninfo_all_blocks=1 00:11:19.058 --rc geninfo_unexecuted_blocks=1 00:11:19.058 00:11:19.058 ' 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:19.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.059 14:05:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.192 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:27.193 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:27.193 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:27.193 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.193 14:05:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:27.193 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:11:27.193 00:11:27.193 --- 10.0.0.2 ping statistics --- 00:11:27.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.193 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:11:27.193 00:11:27.193 --- 10.0.0.1 ping statistics --- 00:11:27.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.193 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.193 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:27.194 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2632368 00:11:27.194 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2632368 00:11:27.194 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:27.194 14:05:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2632368 ']' 00:11:27.194 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.194 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.194 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.194 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.194 14:05:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:27.194 [2024-12-06 14:05:14.950473] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:11:27.194 [2024-12-06 14:05:14.950538] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.194 [2024-12-06 14:05:15.049903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.194 [2024-12-06 14:05:15.102715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.194 [2024-12-06 14:05:15.102764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.194 [2024-12-06 14:05:15.102773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.194 [2024-12-06 14:05:15.102780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.194 [2024-12-06 14:05:15.102786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
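For readability, the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291, followed by the nvmf_tgt launch at nvmf/common.sh@508) amounts to roughly the bash below. This is a condensed sketch assembled from the traced commands, reusing the same interface, namespace and address names; it is not a verbatim copy of common.sh.

    # the target NIC cvl_0_0 is isolated in its own namespace; the initiator NIC cvl_0_1 stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in, tagged so nvmf_tcp_fini can strip the rule again later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # connectivity check in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the nvmf target itself then runs inside the namespace
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E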
00:11:27.194 [2024-12-06 14:05:15.105184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.194 [2024-12-06 14:05:15.105347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.194 [2024-12-06 14:05:15.105515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:27.194 [2024-12-06 14:05:15.105550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.194 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.194 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:27.194 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:27.194 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.194 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:27.194 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.194 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.194 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.194 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:27.194 [2024-12-06 14:05:15.827466] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:27.456 Malloc0 00:11:27.456 [2024-12-06 14:05:15.914082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2632572 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2632572 /var/tmp/bdevperf.sock 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2632572 ']' 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:27.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:27.456 { 00:11:27.456 "params": { 00:11:27.456 "name": "Nvme$subsystem", 00:11:27.456 "trtype": "$TEST_TRANSPORT", 00:11:27.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:27.456 "adrfam": "ipv4", 00:11:27.456 "trsvcid": "$NVMF_PORT", 00:11:27.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:27.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:27.456 "hdgst": ${hdgst:-false}, 00:11:27.456 "ddgst": ${ddgst:-false} 00:11:27.456 }, 00:11:27.456 "method": "bdev_nvme_attach_controller" 00:11:27.456 } 00:11:27.456 EOF 00:11:27.456 )") 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:27.456 14:05:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:27.456 "params": { 00:11:27.456 "name": "Nvme0", 00:11:27.456 "trtype": "tcp", 00:11:27.456 "traddr": "10.0.0.2", 00:11:27.456 "adrfam": "ipv4", 00:11:27.456 "trsvcid": "4420", 00:11:27.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:27.456 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:27.456 "hdgst": false, 00:11:27.456 "ddgst": false 00:11:27.456 }, 00:11:27.456 "method": "bdev_nvme_attach_controller" 00:11:27.456 }' 00:11:27.456 [2024-12-06 14:05:16.025852] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
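The --json /dev/fd/63 argument in the bdevperf command line above is the generated configuration being fed in without a temporary file; assuming it comes from bash process substitution (which is what a /dev/fd/63 path normally indicates), the traced host_management.sh@72 step is conceptually:

    # gen_nvmf_target_json is defined in the sourced test/nvmf/common.sh and expands to the
    # Nvme0 / nqn.2016-06.io.spdk:cnode0 / 10.0.0.2:4420 attach config printed above.
    # -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: verify workload, -t 10: run for 10 seconds
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10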
00:11:27.456 [2024-12-06 14:05:16.025925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2632572 ] 00:11:27.717 [2024-12-06 14:05:16.121818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.717 [2024-12-06 14:05:16.174945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.977 Running I/O for 10 seconds... 00:11:28.237 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.237 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:28.237 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:28.237 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.237 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:28.501 14:05:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.501 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:28.501 [2024-12-06 14:05:16.945941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf17b0 is same with the state(6) to be set 00:11:28.501 [... the same recv-state message repeats dozens of times here, with only the timestamp advancing, while the host is removed from the subsystem ...] 00:11:28.502 [2024-12-06 14:05:16.946452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1bf17b0 is same with the state(6) to be set 00:11:28.502 [2024-12-06 14:05:16.946466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf17b0 is same with the state(6) to be set 00:11:28.502 [2024-12-06 14:05:16.946703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.946985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.946995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.502 [2024-12-06 14:05:16.947395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.502 [2024-12-06 14:05:16.947403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:28.503 [2024-12-06 14:05:16.947484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 
[2024-12-06 14:05:16.947659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 
14:05:16.947836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:28.503 [2024-12-06 14:05:16.947909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.947918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2f400 is same with the state(6) to be set 00:11:28.503 [2024-12-06 14:05:16.949233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:28.503 task offset: 98304 on job bdev=Nvme0n1 fails 00:11:28.503 00:11:28.503 Latency(us) 00:11:28.503 [2024-12-06T13:05:17.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.503 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:28.503 Job: Nvme0n1 ended in about 0.56 seconds with error 00:11:28.503 Verification LBA range: start 0x0 length 0x400 00:11:28.503 Nvme0n1 : 0.56 1380.84 86.30 115.07 0.00 41740.03 5789.01 36263.25 00:11:28.503 [2024-12-06T13:05:17.143Z] =================================================================================================================== 00:11:28.503 [2024-12-06T13:05:17.143Z] Total : 1380.84 86.30 115.07 0.00 41740.03 5789.01 36263.25 00:11:28.503 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.503 [2024-12-06 14:05:16.951500] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:28.503 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:28.503 [2024-12-06 14:05:16.951541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1ec70 (9): Bad file descriptor 00:11:28.503 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.503 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:28.503 [2024-12-06 14:05:16.957486] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 
'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:11:28.503 [2024-12-06 14:05:16.957590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:11:28.503 [2024-12-06 14:05:16.957620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.503 [2024-12-06 14:05:16.957634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:11:28.503 [2024-12-06 14:05:16.957643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:11:28.503 [2024-12-06 14:05:16.957650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:11:28.503 [2024-12-06 14:05:16.957658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ec70 00:11:28.503 [2024-12-06 14:05:16.957680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1ec70 (9): Bad file descriptor 00:11:28.504 [2024-12-06 14:05:16.957694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:11:28.504 [2024-12-06 14:05:16.957702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:11:28.504 [2024-12-06 14:05:16.957719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:11:28.504 [2024-12-06 14:05:16.957730] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
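The CONNECT failures just above are the point of this host_management step: the initiator presents hostnqn nqn.2016-06.io.spdk:host0 before that NQN is on cnode0's allowed-host list, so the target rejects the fabric CONNECT, and the script then whitelists the host with rpc_cmd nvmf_subsystem_add_host. A minimal sketch of that same call issued through rpc.py directly (the $rpc shortcut is illustrative; the rest of the subsystem setup is assumed to have been done earlier by host_management.sh):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # illustrative shortcut for the rpc_py used above
# at this point the target logs: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# connections presenting hostnqn nqn.2016-06.io.spdk:host0 now pass the allowed-host check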
00:11:28.504 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.504 14:05:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2632572 00:11:29.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2632572) - No such process 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:29.445 { 00:11:29.445 "params": { 00:11:29.445 "name": "Nvme$subsystem", 00:11:29.445 "trtype": "$TEST_TRANSPORT", 00:11:29.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:29.445 "adrfam": "ipv4", 00:11:29.445 "trsvcid": "$NVMF_PORT", 00:11:29.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:29.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:29.445 "hdgst": ${hdgst:-false}, 00:11:29.445 "ddgst": ${ddgst:-false} 00:11:29.445 }, 00:11:29.445 "method": "bdev_nvme_attach_controller" 00:11:29.445 } 00:11:29.445 EOF 00:11:29.445 )") 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:29.445 14:05:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:29.445 "params": { 00:11:29.445 "name": "Nvme0", 00:11:29.446 "trtype": "tcp", 00:11:29.446 "traddr": "10.0.0.2", 00:11:29.446 "adrfam": "ipv4", 00:11:29.446 "trsvcid": "4420", 00:11:29.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:29.446 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:29.446 "hdgst": false, 00:11:29.446 "ddgst": false 00:11:29.446 }, 00:11:29.446 "method": "bdev_nvme_attach_controller" 00:11:29.446 }' 00:11:29.446 [2024-12-06 14:05:18.021836] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
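Reassembled from the gen_nvmf_target_json heredoc and the printf output above, the configuration handed to bdevperf over --json /dev/fd/62 boils down to a single bdev_nvme_attach_controller entry. A sketch of the equivalent run with the config written to a regular file instead of a process-substitution fd (bdevperf.json is an illustrative name, and the outer "subsystems"/"bdev" wrapper is assumed from gen_nvmf_target_json rather than printed in this excerpt):
# write the generated bdev config to a file and point bdevperf at it
cat > bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# same flags as the traced invocation: queue depth 64, 64 KiB IOs, verify workload, 1 second
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json bdevperf.json -q 64 -o 65536 -w verify -t 1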
00:11:29.446 [2024-12-06 14:05:18.021891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2633057 ] 00:11:29.706 [2024-12-06 14:05:18.109918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.706 [2024-12-06 14:05:18.144588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.966 Running I/O for 1 seconds... 00:11:30.907 1669.00 IOPS, 104.31 MiB/s 00:11:30.907 Latency(us) 00:11:30.907 [2024-12-06T13:05:19.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:30.907 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:30.907 Verification LBA range: start 0x0 length 0x400 00:11:30.907 Nvme0n1 : 1.04 1666.78 104.17 0.00 0.00 37730.96 3686.40 31894.19 00:11:30.907 [2024-12-06T13:05:19.547Z] =================================================================================================================== 00:11:30.907 [2024-12-06T13:05:19.547Z] Total : 1666.78 104.17 0.00 0.00 37730.96 3686.40 31894.19 00:11:30.907 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:30.907 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:30.907 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:30.907 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.167 rmmod nvme_tcp 00:11:31.167 rmmod nvme_fabrics 00:11:31.167 rmmod nvme_keyring 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2632368 ']' 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2632368 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2632368 ']' 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2632368 00:11:31.167 14:05:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:11:31.167 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.168 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2632368 00:11:31.168 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:31.168 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:31.168 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2632368' 00:11:31.168 killing process with pid 2632368 00:11:31.168 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2632368 00:11:31.168 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2632368 00:11:31.168 [2024-12-06 14:05:19.786729] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.428 14:05:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.336 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.336 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:33.336 00:11:33.336 real 0m14.674s 00:11:33.336 user 0m23.544s 00:11:33.336 sys 0m6.745s 00:11:33.336 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.336 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:33.336 ************************************ 00:11:33.336 END TEST nvmf_host_management 00:11:33.336 ************************************ 00:11:33.336 14:05:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
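Before nvmf_lvol starts below, nvmftestfini has just torn the host_management fixture down. A condensed sketch of that teardown in the order the trace shows it ($nvmfpid stands in for the concrete pid 2632368; remove_spdk_ns is not expanded in the trace, so the namespace cleanup step is an assumption):
modprobe -v -r nvme-tcp                               # nvmfcleanup: the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess: stop the nvmf_tgt reactor (pid 2632368 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop only the SPDK_NVMF-tagged rules
ip -4 addr flush cvl_0_1                              # remove_spdk_ns (assumed to delete cvl_0_0_ns_spdk) plus the initiator address flush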
00:11:33.336 14:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.336 14:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.336 14:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.597 ************************************ 00:11:33.597 START TEST nvmf_lvol 00:11:33.597 ************************************ 00:11:33.597 14:05:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:33.597 * Looking for test storage... 00:11:33.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:33.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.597 --rc genhtml_branch_coverage=1 00:11:33.597 --rc genhtml_function_coverage=1 00:11:33.597 --rc genhtml_legend=1 00:11:33.597 --rc geninfo_all_blocks=1 00:11:33.597 --rc geninfo_unexecuted_blocks=1 00:11:33.597 00:11:33.597 ' 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:33.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.597 --rc genhtml_branch_coverage=1 00:11:33.597 --rc genhtml_function_coverage=1 00:11:33.597 --rc genhtml_legend=1 00:11:33.597 --rc geninfo_all_blocks=1 00:11:33.597 --rc geninfo_unexecuted_blocks=1 00:11:33.597 00:11:33.597 ' 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:33.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.597 --rc genhtml_branch_coverage=1 00:11:33.597 --rc genhtml_function_coverage=1 00:11:33.597 --rc genhtml_legend=1 00:11:33.597 --rc geninfo_all_blocks=1 00:11:33.597 --rc geninfo_unexecuted_blocks=1 00:11:33.597 00:11:33.597 ' 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:33.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.597 --rc genhtml_branch_coverage=1 00:11:33.597 --rc genhtml_function_coverage=1 00:11:33.597 --rc genhtml_legend=1 00:11:33.597 --rc geninfo_all_blocks=1 00:11:33.597 --rc geninfo_unexecuted_blocks=1 00:11:33.597 00:11:33.597 ' 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.597 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.598 14:05:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:41.739 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:41.739 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.739 14:05:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:41.739 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:41.739 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.739 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:11:41.740 00:11:41.740 --- 10.0.0.2 ping statistics --- 00:11:41.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.740 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:41.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:11:41.740 00:11:41.740 --- 10.0.0.1 ping statistics --- 00:11:41.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.740 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2637537 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2637537 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2637537 ']' 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.740 14:05:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:41.740 [2024-12-06 14:05:29.785075] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:11:41.740 [2024-12-06 14:05:29.785137] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.740 [2024-12-06 14:05:29.884318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:41.740 [2024-12-06 14:05:29.937829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.740 [2024-12-06 14:05:29.937881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.740 [2024-12-06 14:05:29.937890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.740 [2024-12-06 14:05:29.937897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.740 [2024-12-06 14:05:29.937903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.740 [2024-12-06 14:05:29.939759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.740 [2024-12-06 14:05:29.939920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.740 [2024-12-06 14:05:29.939921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.001 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.001 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:11:42.001 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.001 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.001 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:42.262 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.262 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:42.262 [2024-12-06 14:05:30.832126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.262 14:05:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:42.522 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:42.522 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:42.782 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:42.782 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:43.042 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:43.303 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=616e7887-f780-4d89-a7ed-28871500b4b0 00:11:43.303 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 616e7887-f780-4d89-a7ed-28871500b4b0 lvol 20 00:11:43.303 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f33f0866-e5c7-4295-8189-4eb98119e30b 00:11:43.303 14:05:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:43.563 14:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f33f0866-e5c7-4295-8189-4eb98119e30b 00:11:43.823 14:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:44.085 [2024-12-06 14:05:32.465644] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.085 14:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:44.085 14:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2638182 00:11:44.085 14:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:44.085 14:05:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:45.470 14:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f33f0866-e5c7-4295-8189-4eb98119e30b MY_SNAPSHOT 00:11:45.470 14:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e767e610-2c91-4786-830c-e984b7f347a7 00:11:45.470 14:05:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f33f0866-e5c7-4295-8189-4eb98119e30b 30 00:11:45.470 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e767e610-2c91-4786-830c-e984b7f347a7 MY_CLONE 00:11:45.731 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a1f873e1-5b8a-434c-8247-ed934392e109 00:11:45.731 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a1f873e1-5b8a-434c-8247-ed934392e109 00:11:46.299 14:05:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2638182 00:11:54.437 Initializing NVMe Controllers 00:11:54.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:54.437 Controller IO queue size 128, less than required. 00:11:54.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
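Condensed from the rpc.py calls traced above, the lvol target for this test is built roughly as follows (UUIDs elided, $rpc standing in for the full scripts/rpc.py path used in the log):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                    # -> Malloc0
$rpc bdev_malloc_create 64 512                    # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # lvstore on top of the raid0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB logical volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf drives random writes against the exported namespace,
# the test pokes the snapshot/clone paths on the live volume:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

Running the lvol operations while perf is still writing is deliberate: snapshot, resize, clone and inflate are all exercised under live I/O.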
00:11:54.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:54.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:54.437 Initialization complete. Launching workers. 00:11:54.437 ======================================================== 00:11:54.437 Latency(us) 00:11:54.437 Device Information : IOPS MiB/s Average min max 00:11:54.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16055.20 62.72 7974.55 1493.34 62271.40 00:11:54.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17038.20 66.56 7512.35 649.31 48062.98 00:11:54.437 ======================================================== 00:11:54.437 Total : 33093.40 129.27 7736.59 649.31 62271.40 00:11:54.437 00:11:54.437 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:54.698 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f33f0866-e5c7-4295-8189-4eb98119e30b 00:11:54.958 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 616e7887-f780-4d89-a7ed-28871500b4b0 00:11:54.958 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:54.958 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:54.958 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:54.958 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.958 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:54.958 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.958 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:54.958 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.958 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.958 rmmod nvme_tcp 00:11:55.218 rmmod nvme_fabrics 00:11:55.218 rmmod nvme_keyring 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2637537 ']' 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2637537 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2637537 ']' 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2637537 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2637537 00:11:55.218 14:05:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2637537' 00:11:55.218 killing process with pid 2637537 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2637537 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2637537 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.218 14:05:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.781 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.781 00:11:57.781 real 0m23.947s 00:11:57.781 user 1m4.872s 00:11:57.781 sys 0m8.604s 00:11:57.781 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.781 14:05:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:57.781 ************************************ 00:11:57.781 END TEST nvmf_lvol 00:11:57.781 ************************************ 00:11:57.781 14:05:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:57.781 14:05:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:57.781 14:05:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.781 14:05:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:57.781 ************************************ 00:11:57.781 START TEST nvmf_lvs_grow 00:11:57.781 ************************************ 00:11:57.781 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:57.781 * Looking for test storage... 
00:11:57.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.781 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:57.781 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:11:57.781 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:57.781 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.782 --rc genhtml_branch_coverage=1 00:11:57.782 --rc genhtml_function_coverage=1 00:11:57.782 --rc genhtml_legend=1 00:11:57.782 --rc geninfo_all_blocks=1 00:11:57.782 --rc geninfo_unexecuted_blocks=1 00:11:57.782 00:11:57.782 ' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.782 --rc genhtml_branch_coverage=1 00:11:57.782 --rc genhtml_function_coverage=1 00:11:57.782 --rc genhtml_legend=1 00:11:57.782 --rc geninfo_all_blocks=1 00:11:57.782 --rc geninfo_unexecuted_blocks=1 00:11:57.782 00:11:57.782 ' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.782 --rc genhtml_branch_coverage=1 00:11:57.782 --rc genhtml_function_coverage=1 00:11:57.782 --rc genhtml_legend=1 00:11:57.782 --rc geninfo_all_blocks=1 00:11:57.782 --rc geninfo_unexecuted_blocks=1 00:11:57.782 00:11:57.782 ' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.782 --rc genhtml_branch_coverage=1 00:11:57.782 --rc genhtml_function_coverage=1 00:11:57.782 --rc genhtml_legend=1 00:11:57.782 --rc geninfo_all_blocks=1 00:11:57.782 --rc geninfo_unexecuted_blocks=1 00:11:57.782 00:11:57.782 ' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:57.782 14:05:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:57.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:57.782 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:57.783 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.783 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.783 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.783 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:57.783 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:57.783 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:11:57.783 14:05:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.160 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:06.161 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:06.161 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.161 14:05:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:06.161 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:06.161 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:06.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:12:06.161 00:12:06.161 --- 10.0.0.2 ping statistics --- 00:12:06.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.161 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:12:06.161 00:12:06.161 --- 10.0.0.1 ping statistics --- 00:12:06.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.161 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2644594 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2644594 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2644594 ']' 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.161 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.162 14:05:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.162 [2024-12-06 14:05:53.827533] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
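One detail worth calling out from the setup and teardown traces: the firewall rule is not removed by remembering its position but by tagging it. The ipts/iptr pair seen in the log works roughly like the reconstruction below (simplified from the trace, not copied from nvmf/common.sh):

ipts() {
    # run iptables, appending a comment that tags the rule as test-owned
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {
    # teardown: rewrite the ruleset without any SPDK_NVMF-tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # during nvmf_tcp_init
iptr                                                        # during nvmftestfini

This keeps cleanup simple: however many rules the tests added, one pass of iptr drops the tagged ones without touching the rest of the host's configuration.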
00:12:06.162 [2024-12-06 14:05:53.827598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.162 [2024-12-06 14:05:53.928517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.162 [2024-12-06 14:05:53.979587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.162 [2024-12-06 14:05:53.979636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.162 [2024-12-06 14:05:53.979644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.162 [2024-12-06 14:05:53.979652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.162 [2024-12-06 14:05:53.979657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.162 [2024-12-06 14:05:53.980468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.162 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.162 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:12:06.162 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:06.162 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.162 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.162 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.162 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:06.421 [2024-12-06 14:05:54.844737] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:06.421 ************************************ 00:12:06.421 START TEST lvs_grow_clean 00:12:06.421 ************************************ 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:06.421 14:05:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.421 14:05:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:06.681 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:06.681 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:06.941 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:06.941 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:06.941 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:06.941 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:06.941 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:06.941 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a lvol 150 00:12:07.201 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=47e4fa6c-0ce4-4053-94af-def540308997 00:12:07.201 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:07.201 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:07.461 [2024-12-06 14:05:55.885991] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:07.461 [2024-12-06 14:05:55.886060] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:07.461 true 00:12:07.461 14:05:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:07.461 14:05:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:07.461 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:07.461 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:07.721 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 47e4fa6c-0ce4-4053-94af-def540308997 00:12:07.981 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:07.981 [2024-12-06 14:05:56.596247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.981 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.240 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2645277 00:12:08.240 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:08.240 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:08.240 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2645277 /var/tmp/bdevperf.sock 00:12:08.240 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2645277 ']' 00:12:08.240 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:08.240 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.240 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:08.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:08.240 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.240 14:05:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:08.240 [2024-12-06 14:05:56.852726] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
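The growth sequence that gives lvs_grow_clean its name is easiest to see with the rpc.py calls pulled out of the trace (long workspace paths shortened; the observed cluster counts, 49 before and 99 after, match the log):

rpc=./scripts/rpc.py
truncate -s 200M ./aio_bdev                        # 200 MiB backing file
$rpc bdev_aio_create ./aio_bdev aio_bdev 4096      # file-backed AIO bdev, 4 KiB blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
$rpc bdev_lvol_create -u "$lvs" lvol 150           # 150 MiB volume, exported over NVMe/TCP
truncate -s 400M ./aio_bdev                        # grow the backing file...
$rpc bdev_aio_rescan aio_bdev                      # ...and let the AIO bdev see the new size
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
# later, while bdevperf is writing to the exported volume:
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99

Rescanning the AIO bdev alone is not enough; the lvstore only claims the new clusters once bdev_lvol_grow_lvstore runs, which is the 49 to 99 jump visible further down in the log.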
00:12:08.240 [2024-12-06 14:05:56.852793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2645277 ] 00:12:08.501 [2024-12-06 14:05:56.945307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.501 [2024-12-06 14:05:56.997361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.070 14:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.070 14:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:12:09.070 14:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:09.331 Nvme0n1 00:12:09.331 14:05:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:09.591 [ 00:12:09.591 { 00:12:09.591 "name": "Nvme0n1", 00:12:09.591 "aliases": [ 00:12:09.591 "47e4fa6c-0ce4-4053-94af-def540308997" 00:12:09.591 ], 00:12:09.591 "product_name": "NVMe disk", 00:12:09.591 "block_size": 4096, 00:12:09.591 "num_blocks": 38912, 00:12:09.591 "uuid": "47e4fa6c-0ce4-4053-94af-def540308997", 00:12:09.591 "numa_id": 0, 00:12:09.591 "assigned_rate_limits": { 00:12:09.591 "rw_ios_per_sec": 0, 00:12:09.591 "rw_mbytes_per_sec": 0, 00:12:09.591 "r_mbytes_per_sec": 0, 00:12:09.591 "w_mbytes_per_sec": 0 00:12:09.591 }, 00:12:09.591 "claimed": false, 00:12:09.591 "zoned": false, 00:12:09.591 "supported_io_types": { 00:12:09.591 "read": true, 00:12:09.591 "write": true, 00:12:09.591 "unmap": true, 00:12:09.591 "flush": true, 00:12:09.591 "reset": true, 00:12:09.591 "nvme_admin": true, 00:12:09.591 "nvme_io": true, 00:12:09.591 "nvme_io_md": false, 00:12:09.591 "write_zeroes": true, 00:12:09.591 "zcopy": false, 00:12:09.591 "get_zone_info": false, 00:12:09.591 "zone_management": false, 00:12:09.591 "zone_append": false, 00:12:09.591 "compare": true, 00:12:09.591 "compare_and_write": true, 00:12:09.591 "abort": true, 00:12:09.591 "seek_hole": false, 00:12:09.591 "seek_data": false, 00:12:09.591 "copy": true, 00:12:09.591 "nvme_iov_md": false 00:12:09.591 }, 00:12:09.591 "memory_domains": [ 00:12:09.591 { 00:12:09.591 "dma_device_id": "system", 00:12:09.591 "dma_device_type": 1 00:12:09.591 } 00:12:09.591 ], 00:12:09.591 "driver_specific": { 00:12:09.591 "nvme": [ 00:12:09.591 { 00:12:09.591 "trid": { 00:12:09.591 "trtype": "TCP", 00:12:09.591 "adrfam": "IPv4", 00:12:09.591 "traddr": "10.0.0.2", 00:12:09.591 "trsvcid": "4420", 00:12:09.591 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:09.591 }, 00:12:09.591 "ctrlr_data": { 00:12:09.591 "cntlid": 1, 00:12:09.591 "vendor_id": "0x8086", 00:12:09.591 "model_number": "SPDK bdev Controller", 00:12:09.591 "serial_number": "SPDK0", 00:12:09.591 "firmware_revision": "25.01", 00:12:09.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:09.591 "oacs": { 00:12:09.591 "security": 0, 00:12:09.591 "format": 0, 00:12:09.591 "firmware": 0, 00:12:09.591 "ns_manage": 0 00:12:09.591 }, 00:12:09.591 "multi_ctrlr": true, 00:12:09.591 
"ana_reporting": false 00:12:09.591 }, 00:12:09.591 "vs": { 00:12:09.591 "nvme_version": "1.3" 00:12:09.591 }, 00:12:09.591 "ns_data": { 00:12:09.591 "id": 1, 00:12:09.591 "can_share": true 00:12:09.591 } 00:12:09.591 } 00:12:09.591 ], 00:12:09.591 "mp_policy": "active_passive" 00:12:09.591 } 00:12:09.591 } 00:12:09.591 ] 00:12:09.591 14:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2645611 00:12:09.591 14:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:09.591 14:05:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:09.591 Running I/O for 10 seconds... 00:12:10.970 Latency(us) 00:12:10.970 [2024-12-06T13:05:59.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.970 Nvme0n1 : 1.00 24511.00 95.75 0.00 0.00 0.00 0.00 0.00 00:12:10.970 [2024-12-06T13:05:59.610Z] =================================================================================================================== 00:12:10.970 [2024-12-06T13:05:59.610Z] Total : 24511.00 95.75 0.00 0.00 0.00 0.00 0.00 00:12:10.970 00:12:11.539 14:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:11.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.799 Nvme0n1 : 2.00 24615.50 96.15 0.00 0.00 0.00 0.00 0.00 00:12:11.799 [2024-12-06T13:06:00.439Z] =================================================================================================================== 00:12:11.799 [2024-12-06T13:06:00.439Z] Total : 24615.50 96.15 0.00 0.00 0.00 0.00 0.00 00:12:11.799 00:12:11.799 true 00:12:11.799 14:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:11.799 14:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:12.059 14:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:12.059 14:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:12.059 14:06:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2645611 00:12:12.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.630 Nvme0n1 : 3.00 24645.00 96.27 0.00 0.00 0.00 0.00 0.00 00:12:12.630 [2024-12-06T13:06:01.270Z] =================================================================================================================== 00:12:12.630 [2024-12-06T13:06:01.270Z] Total : 24645.00 96.27 0.00 0.00 0.00 0.00 0.00 00:12:12.630 00:12:14.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.031 Nvme0n1 : 4.00 24699.75 96.48 0.00 0.00 0.00 0.00 0.00 00:12:14.031 [2024-12-06T13:06:02.671Z] 
=================================================================================================================== 00:12:14.031 [2024-12-06T13:06:02.671Z] Total : 24699.75 96.48 0.00 0.00 0.00 0.00 0.00 00:12:14.031 00:12:14.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.602 Nvme0n1 : 5.00 24739.00 96.64 0.00 0.00 0.00 0.00 0.00 00:12:14.602 [2024-12-06T13:06:03.242Z] =================================================================================================================== 00:12:14.602 [2024-12-06T13:06:03.242Z] Total : 24739.00 96.64 0.00 0.00 0.00 0.00 0.00 00:12:14.602 00:12:15.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.986 Nvme0n1 : 6.00 24773.17 96.77 0.00 0.00 0.00 0.00 0.00 00:12:15.986 [2024-12-06T13:06:04.626Z] =================================================================================================================== 00:12:15.986 [2024-12-06T13:06:04.626Z] Total : 24773.17 96.77 0.00 0.00 0.00 0.00 0.00 00:12:15.986 00:12:16.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.927 Nvme0n1 : 7.00 24796.43 96.86 0.00 0.00 0.00 0.00 0.00 00:12:16.927 [2024-12-06T13:06:05.567Z] =================================================================================================================== 00:12:16.927 [2024-12-06T13:06:05.567Z] Total : 24796.43 96.86 0.00 0.00 0.00 0.00 0.00 00:12:16.927 00:12:17.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.866 Nvme0n1 : 8.00 24812.88 96.93 0.00 0.00 0.00 0.00 0.00 00:12:17.866 [2024-12-06T13:06:06.506Z] =================================================================================================================== 00:12:17.866 [2024-12-06T13:06:06.506Z] Total : 24812.88 96.93 0.00 0.00 0.00 0.00 0.00 00:12:17.867 00:12:18.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.806 Nvme0n1 : 9.00 24827.44 96.98 0.00 0.00 0.00 0.00 0.00 00:12:18.806 [2024-12-06T13:06:07.446Z] =================================================================================================================== 00:12:18.806 [2024-12-06T13:06:07.446Z] Total : 24827.44 96.98 0.00 0.00 0.00 0.00 0.00 00:12:18.806 00:12:19.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.747 Nvme0n1 : 10.00 24835.90 97.02 0.00 0.00 0.00 0.00 0.00 00:12:19.747 [2024-12-06T13:06:08.387Z] =================================================================================================================== 00:12:19.747 [2024-12-06T13:06:08.387Z] Total : 24835.90 97.02 0.00 0.00 0.00 0.00 0.00 00:12:19.747 00:12:19.747 00:12:19.747 Latency(us) 00:12:19.747 [2024-12-06T13:06:08.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.747 Nvme0n1 : 10.01 24835.82 97.01 0.00 0.00 5149.66 3454.29 10103.47 00:12:19.747 [2024-12-06T13:06:08.387Z] =================================================================================================================== 00:12:19.747 [2024-12-06T13:06:08.387Z] Total : 24835.82 97.01 0.00 0.00 5149.66 3454.29 10103.47 00:12:19.747 { 00:12:19.747 "results": [ 00:12:19.747 { 00:12:19.747 "job": "Nvme0n1", 00:12:19.747 "core_mask": "0x2", 00:12:19.747 "workload": "randwrite", 00:12:19.747 "status": "finished", 00:12:19.747 "queue_depth": 128, 00:12:19.747 "io_size": 4096, 00:12:19.747 
"runtime": 10.005187, 00:12:19.747 "iops": 24835.817661379042, 00:12:19.747 "mibps": 97.01491273976188, 00:12:19.747 "io_failed": 0, 00:12:19.747 "io_timeout": 0, 00:12:19.747 "avg_latency_us": 5149.65587715521, 00:12:19.747 "min_latency_us": 3454.2933333333335, 00:12:19.747 "max_latency_us": 10103.466666666667 00:12:19.747 } 00:12:19.747 ], 00:12:19.747 "core_count": 1 00:12:19.747 } 00:12:19.747 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2645277 00:12:19.747 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2645277 ']' 00:12:19.747 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2645277 00:12:19.747 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:12:19.747 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.747 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2645277 00:12:19.748 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:19.748 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:19.748 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2645277' 00:12:19.748 killing process with pid 2645277 00:12:19.748 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2645277 00:12:19.748 Received shutdown signal, test time was about 10.000000 seconds 00:12:19.748 00:12:19.748 Latency(us) 00:12:19.748 [2024-12-06T13:06:08.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.748 [2024-12-06T13:06:08.388Z] =================================================================================================================== 00:12:19.748 [2024-12-06T13:06:08.388Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:19.748 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2645277 00:12:20.008 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:20.008 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:20.269 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:20.269 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:20.528 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:20.528 14:06:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:20.528 14:06:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:20.528 [2024-12-06 14:06:09.132669] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:20.788 request: 00:12:20.788 { 00:12:20.788 "uuid": "a2bbac0e-d9b4-43e8-8908-d623b41d7f8a", 00:12:20.788 "method": "bdev_lvol_get_lvstores", 00:12:20.788 "req_id": 1 00:12:20.788 } 00:12:20.788 Got JSON-RPC error response 00:12:20.788 response: 00:12:20.788 { 00:12:20.788 "code": -19, 00:12:20.788 "message": "No such device" 00:12:20.788 } 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:20.788 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:21.048 aio_bdev 00:12:21.048 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 47e4fa6c-0ce4-4053-94af-def540308997 00:12:21.048 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=47e4fa6c-0ce4-4053-94af-def540308997 00:12:21.048 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.048 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:12:21.048 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.048 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.048 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:21.308 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 47e4fa6c-0ce4-4053-94af-def540308997 -t 2000 00:12:21.308 [ 00:12:21.308 { 00:12:21.308 "name": "47e4fa6c-0ce4-4053-94af-def540308997", 00:12:21.308 "aliases": [ 00:12:21.308 "lvs/lvol" 00:12:21.308 ], 00:12:21.308 "product_name": "Logical Volume", 00:12:21.308 "block_size": 4096, 00:12:21.308 "num_blocks": 38912, 00:12:21.308 "uuid": "47e4fa6c-0ce4-4053-94af-def540308997", 00:12:21.308 "assigned_rate_limits": { 00:12:21.308 "rw_ios_per_sec": 0, 00:12:21.308 "rw_mbytes_per_sec": 0, 00:12:21.308 "r_mbytes_per_sec": 0, 00:12:21.308 "w_mbytes_per_sec": 0 00:12:21.308 }, 00:12:21.308 "claimed": false, 00:12:21.308 "zoned": false, 00:12:21.308 "supported_io_types": { 00:12:21.308 "read": true, 00:12:21.308 "write": true, 00:12:21.308 "unmap": true, 00:12:21.308 "flush": false, 00:12:21.308 "reset": true, 00:12:21.308 "nvme_admin": false, 00:12:21.308 "nvme_io": false, 00:12:21.308 "nvme_io_md": false, 00:12:21.308 "write_zeroes": true, 00:12:21.308 "zcopy": false, 00:12:21.308 "get_zone_info": false, 00:12:21.308 "zone_management": false, 00:12:21.308 "zone_append": false, 00:12:21.308 "compare": false, 00:12:21.308 "compare_and_write": false, 00:12:21.308 "abort": false, 00:12:21.308 "seek_hole": true, 00:12:21.308 "seek_data": true, 00:12:21.308 "copy": false, 00:12:21.308 "nvme_iov_md": false 00:12:21.308 }, 00:12:21.308 "driver_specific": { 00:12:21.308 "lvol": { 00:12:21.308 "lvol_store_uuid": "a2bbac0e-d9b4-43e8-8908-d623b41d7f8a", 00:12:21.308 "base_bdev": "aio_bdev", 00:12:21.308 "thin_provision": false, 00:12:21.308 "num_allocated_clusters": 38, 00:12:21.308 "snapshot": false, 00:12:21.308 "clone": false, 00:12:21.308 "esnap_clone": false 00:12:21.308 } 00:12:21.308 } 00:12:21.308 } 00:12:21.308 ] 00:12:21.308 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:12:21.308 14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:21.309 
14:06:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:21.569 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:21.569 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:21.569 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:21.829 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:21.829 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 47e4fa6c-0ce4-4053-94af-def540308997 00:12:21.829 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a2bbac0e-d9b4-43e8-8908-d623b41d7f8a 00:12:22.088 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:22.349 00:12:22.349 real 0m15.846s 00:12:22.349 user 0m15.382s 00:12:22.349 sys 0m1.552s 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:22.349 ************************************ 00:12:22.349 END TEST lvs_grow_clean 00:12:22.349 ************************************ 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:22.349 ************************************ 00:12:22.349 START TEST lvs_grow_dirty 00:12:22.349 ************************************ 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:22.349 14:06:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:22.610 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:22.610 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:22.870 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1c4275eb-c709-4894-946f-e8322b34c7da 00:12:22.870 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:22.870 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:22.870 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:22.870 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:22.870 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1c4275eb-c709-4894-946f-e8322b34c7da lvol 150 00:12:23.129 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd 00:12:23.129 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:23.129 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:23.129 [2024-12-06 14:06:11.747785] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:23.129 [2024-12-06 14:06:11.747825] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:23.129 true 00:12:23.129 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:23.129 14:06:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:23.389 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:23.389 14:06:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:23.648 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd 00:12:23.648 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:23.908 [2024-12-06 14:06:12.393661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.908 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.169 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2648945 00:12:24.169 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:24.169 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:24.169 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2648945 /var/tmp/bdevperf.sock 00:12:24.169 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2648945 ']' 00:12:24.169 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:24.169 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.169 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:24.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:24.169 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.169 14:06:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:24.169 [2024-12-06 14:06:12.633872] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:12:24.169 [2024-12-06 14:06:12.633924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2648945 ] 00:12:24.169 [2024-12-06 14:06:12.718663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.169 [2024-12-06 14:06:12.748661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.108 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.108 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:25.108 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:25.108 Nvme0n1 00:12:25.108 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:25.369 [ 00:12:25.369 { 00:12:25.369 "name": "Nvme0n1", 00:12:25.369 "aliases": [ 00:12:25.369 "f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd" 00:12:25.369 ], 00:12:25.369 "product_name": "NVMe disk", 00:12:25.369 "block_size": 4096, 00:12:25.369 "num_blocks": 38912, 00:12:25.369 "uuid": "f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd", 00:12:25.369 "numa_id": 0, 00:12:25.369 "assigned_rate_limits": { 00:12:25.369 "rw_ios_per_sec": 0, 00:12:25.369 "rw_mbytes_per_sec": 0, 00:12:25.369 "r_mbytes_per_sec": 0, 00:12:25.369 "w_mbytes_per_sec": 0 00:12:25.369 }, 00:12:25.369 "claimed": false, 00:12:25.369 "zoned": false, 00:12:25.369 "supported_io_types": { 00:12:25.369 "read": true, 00:12:25.369 "write": true, 00:12:25.369 "unmap": true, 00:12:25.369 "flush": true, 00:12:25.369 "reset": true, 00:12:25.369 "nvme_admin": true, 00:12:25.369 "nvme_io": true, 00:12:25.369 "nvme_io_md": false, 00:12:25.369 "write_zeroes": true, 00:12:25.369 "zcopy": false, 00:12:25.369 "get_zone_info": false, 00:12:25.369 "zone_management": false, 00:12:25.369 "zone_append": false, 00:12:25.369 "compare": true, 00:12:25.369 "compare_and_write": true, 00:12:25.369 "abort": true, 00:12:25.369 "seek_hole": false, 00:12:25.369 "seek_data": false, 00:12:25.369 "copy": true, 00:12:25.369 "nvme_iov_md": false 00:12:25.369 }, 00:12:25.369 "memory_domains": [ 00:12:25.369 { 00:12:25.369 "dma_device_id": "system", 00:12:25.369 "dma_device_type": 1 00:12:25.369 } 00:12:25.369 ], 00:12:25.369 "driver_specific": { 00:12:25.369 "nvme": [ 00:12:25.369 { 00:12:25.369 "trid": { 00:12:25.369 "trtype": "TCP", 00:12:25.369 "adrfam": "IPv4", 00:12:25.369 "traddr": "10.0.0.2", 00:12:25.369 "trsvcid": "4420", 00:12:25.369 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:25.369 }, 00:12:25.369 "ctrlr_data": { 00:12:25.369 "cntlid": 1, 00:12:25.369 "vendor_id": "0x8086", 00:12:25.369 "model_number": "SPDK bdev Controller", 00:12:25.369 "serial_number": "SPDK0", 00:12:25.369 "firmware_revision": "25.01", 00:12:25.369 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:25.369 "oacs": { 00:12:25.369 "security": 0, 00:12:25.369 "format": 0, 00:12:25.369 "firmware": 0, 00:12:25.369 "ns_manage": 0 00:12:25.369 }, 00:12:25.369 "multi_ctrlr": true, 00:12:25.369 
"ana_reporting": false 00:12:25.369 }, 00:12:25.369 "vs": { 00:12:25.369 "nvme_version": "1.3" 00:12:25.369 }, 00:12:25.369 "ns_data": { 00:12:25.369 "id": 1, 00:12:25.369 "can_share": true 00:12:25.369 } 00:12:25.369 } 00:12:25.369 ], 00:12:25.369 "mp_policy": "active_passive" 00:12:25.369 } 00:12:25.369 } 00:12:25.369 ] 00:12:25.369 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2649272 00:12:25.369 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:25.369 14:06:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:25.369 Running I/O for 10 seconds... 00:12:26.320 Latency(us) 00:12:26.320 [2024-12-06T13:06:14.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.320 Nvme0n1 : 1.00 25111.00 98.09 0.00 0.00 0.00 0.00 0.00 00:12:26.320 [2024-12-06T13:06:14.960Z] =================================================================================================================== 00:12:26.320 [2024-12-06T13:06:14.960Z] Total : 25111.00 98.09 0.00 0.00 0.00 0.00 0.00 00:12:26.320 00:12:27.262 14:06:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:27.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:27.522 Nvme0n1 : 2.00 25322.50 98.92 0.00 0.00 0.00 0.00 0.00 00:12:27.522 [2024-12-06T13:06:16.162Z] =================================================================================================================== 00:12:27.522 [2024-12-06T13:06:16.162Z] Total : 25322.50 98.92 0.00 0.00 0.00 0.00 0.00 00:12:27.522 00:12:27.522 true 00:12:27.522 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:27.522 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:27.782 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:27.782 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:27.782 14:06:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2649272 00:12:28.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.351 Nvme0n1 : 3.00 25393.33 99.19 0.00 0.00 0.00 0.00 0.00 00:12:28.351 [2024-12-06T13:06:16.991Z] =================================================================================================================== 00:12:28.351 [2024-12-06T13:06:16.991Z] Total : 25393.33 99.19 0.00 0.00 0.00 0.00 0.00 00:12:28.351 00:12:29.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.291 Nvme0n1 : 4.00 25460.75 99.46 0.00 0.00 0.00 0.00 0.00 00:12:29.291 [2024-12-06T13:06:17.931Z] 
=================================================================================================================== 00:12:29.291 [2024-12-06T13:06:17.931Z] Total : 25460.75 99.46 0.00 0.00 0.00 0.00 0.00 00:12:29.291 00:12:30.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.676 Nvme0n1 : 5.00 25500.80 99.61 0.00 0.00 0.00 0.00 0.00 00:12:30.676 [2024-12-06T13:06:19.316Z] =================================================================================================================== 00:12:30.676 [2024-12-06T13:06:19.316Z] Total : 25500.80 99.61 0.00 0.00 0.00 0.00 0.00 00:12:30.676 00:12:31.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.617 Nvme0n1 : 6.00 25528.00 99.72 0.00 0.00 0.00 0.00 0.00 00:12:31.617 [2024-12-06T13:06:20.257Z] =================================================================================================================== 00:12:31.617 [2024-12-06T13:06:20.257Z] Total : 25528.00 99.72 0.00 0.00 0.00 0.00 0.00 00:12:31.617 00:12:32.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.561 Nvme0n1 : 7.00 25538.14 99.76 0.00 0.00 0.00 0.00 0.00 00:12:32.561 [2024-12-06T13:06:21.202Z] =================================================================================================================== 00:12:32.562 [2024-12-06T13:06:21.202Z] Total : 25538.14 99.76 0.00 0.00 0.00 0.00 0.00 00:12:32.562 00:12:33.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.507 Nvme0n1 : 8.00 25561.75 99.85 0.00 0.00 0.00 0.00 0.00 00:12:33.507 [2024-12-06T13:06:22.147Z] =================================================================================================================== 00:12:33.507 [2024-12-06T13:06:22.147Z] Total : 25561.75 99.85 0.00 0.00 0.00 0.00 0.00 00:12:33.507 00:12:34.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.445 Nvme0n1 : 9.00 25580.33 99.92 0.00 0.00 0.00 0.00 0.00 00:12:34.445 [2024-12-06T13:06:23.085Z] =================================================================================================================== 00:12:34.445 [2024-12-06T13:06:23.085Z] Total : 25580.33 99.92 0.00 0.00 0.00 0.00 0.00 00:12:34.445 00:12:35.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.514 Nvme0n1 : 10.00 25594.70 99.98 0.00 0.00 0.00 0.00 0.00 00:12:35.514 [2024-12-06T13:06:24.154Z] =================================================================================================================== 00:12:35.514 [2024-12-06T13:06:24.154Z] Total : 25594.70 99.98 0.00 0.00 0.00 0.00 0.00 00:12:35.514 00:12:35.514 00:12:35.514 Latency(us) 00:12:35.514 [2024-12-06T13:06:24.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.514 Nvme0n1 : 10.00 25594.25 99.98 0.00 0.00 4997.80 3072.00 14964.05 00:12:35.514 [2024-12-06T13:06:24.154Z] =================================================================================================================== 00:12:35.514 [2024-12-06T13:06:24.154Z] Total : 25594.25 99.98 0.00 0.00 4997.80 3072.00 14964.05 00:12:35.514 { 00:12:35.514 "results": [ 00:12:35.514 { 00:12:35.514 "job": "Nvme0n1", 00:12:35.514 "core_mask": "0x2", 00:12:35.514 "workload": "randwrite", 00:12:35.514 "status": "finished", 00:12:35.514 "queue_depth": 128, 00:12:35.514 "io_size": 4096, 00:12:35.514 
"runtime": 10.003378, 00:12:35.515 "iops": 25594.254260910664, 00:12:35.515 "mibps": 99.97755570668228, 00:12:35.515 "io_failed": 0, 00:12:35.515 "io_timeout": 0, 00:12:35.515 "avg_latency_us": 4997.804189160864, 00:12:35.515 "min_latency_us": 3072.0, 00:12:35.515 "max_latency_us": 14964.053333333333 00:12:35.515 } 00:12:35.515 ], 00:12:35.515 "core_count": 1 00:12:35.515 } 00:12:35.515 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2648945 00:12:35.515 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2648945 ']' 00:12:35.515 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2648945 00:12:35.515 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:12:35.515 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.515 14:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2648945 00:12:35.515 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:35.515 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:35.515 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2648945' 00:12:35.515 killing process with pid 2648945 00:12:35.515 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2648945 00:12:35.515 Received shutdown signal, test time was about 10.000000 seconds 00:12:35.515 00:12:35.515 Latency(us) 00:12:35.515 [2024-12-06T13:06:24.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.515 [2024-12-06T13:06:24.155Z] =================================================================================================================== 00:12:35.515 [2024-12-06T13:06:24.155Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:35.515 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2648945 00:12:35.515 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:35.773 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:36.031 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:36.031 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:36.290 14:06:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2644594 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2644594 00:12:36.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2644594 Killed "${NVMF_APP[@]}" "$@" 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2651435 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2651435 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2651435 ']' 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.290 14:06:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:36.290 [2024-12-06 14:06:24.805446] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:12:36.290 [2024-12-06 14:06:24.805513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.290 [2024-12-06 14:06:24.897670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.549 [2024-12-06 14:06:24.929294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.549 [2024-12-06 14:06:24.929321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.549 [2024-12-06 14:06:24.929326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.549 [2024-12-06 14:06:24.929331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:36.549 [2024-12-06 14:06:24.929335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.549 [2024-12-06 14:06:24.929823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.117 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.117 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:37.117 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:37.117 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:37.117 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:37.117 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.117 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:37.376 [2024-12-06 14:06:25.789552] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:37.376 [2024-12-06 14:06:25.789625] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:37.376 [2024-12-06 14:06:25.789648] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:37.376 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:37.376 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd 00:12:37.376 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd 00:12:37.376 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.376 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:37.376 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.376 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.376 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:37.376 14:06:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd -t 2000 00:12:37.635 [ 00:12:37.635 { 00:12:37.635 "name": "f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd", 00:12:37.635 "aliases": [ 00:12:37.635 "lvs/lvol" 00:12:37.635 ], 00:12:37.635 "product_name": "Logical Volume", 00:12:37.635 "block_size": 4096, 00:12:37.635 "num_blocks": 38912, 00:12:37.635 "uuid": "f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd", 00:12:37.635 "assigned_rate_limits": { 00:12:37.635 "rw_ios_per_sec": 0, 00:12:37.635 "rw_mbytes_per_sec": 0, 
00:12:37.635 "r_mbytes_per_sec": 0, 00:12:37.635 "w_mbytes_per_sec": 0 00:12:37.635 }, 00:12:37.635 "claimed": false, 00:12:37.635 "zoned": false, 00:12:37.635 "supported_io_types": { 00:12:37.635 "read": true, 00:12:37.635 "write": true, 00:12:37.635 "unmap": true, 00:12:37.635 "flush": false, 00:12:37.635 "reset": true, 00:12:37.635 "nvme_admin": false, 00:12:37.635 "nvme_io": false, 00:12:37.635 "nvme_io_md": false, 00:12:37.635 "write_zeroes": true, 00:12:37.635 "zcopy": false, 00:12:37.635 "get_zone_info": false, 00:12:37.635 "zone_management": false, 00:12:37.635 "zone_append": false, 00:12:37.635 "compare": false, 00:12:37.635 "compare_and_write": false, 00:12:37.635 "abort": false, 00:12:37.635 "seek_hole": true, 00:12:37.635 "seek_data": true, 00:12:37.635 "copy": false, 00:12:37.635 "nvme_iov_md": false 00:12:37.635 }, 00:12:37.635 "driver_specific": { 00:12:37.635 "lvol": { 00:12:37.635 "lvol_store_uuid": "1c4275eb-c709-4894-946f-e8322b34c7da", 00:12:37.635 "base_bdev": "aio_bdev", 00:12:37.635 "thin_provision": false, 00:12:37.635 "num_allocated_clusters": 38, 00:12:37.635 "snapshot": false, 00:12:37.635 "clone": false, 00:12:37.635 "esnap_clone": false 00:12:37.635 } 00:12:37.635 } 00:12:37.635 } 00:12:37.635 ] 00:12:37.635 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:37.635 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:37.635 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:37.895 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:37.895 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:37.895 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:37.895 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:37.895 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:38.155 [2024-12-06 14:06:26.626197] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:38.155 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:38.414 request: 00:12:38.414 { 00:12:38.414 "uuid": "1c4275eb-c709-4894-946f-e8322b34c7da", 00:12:38.414 "method": "bdev_lvol_get_lvstores", 00:12:38.414 "req_id": 1 00:12:38.414 } 00:12:38.414 Got JSON-RPC error response 00:12:38.414 response: 00:12:38.414 { 00:12:38.414 "code": -19, 00:12:38.414 "message": "No such device" 00:12:38.414 } 00:12:38.414 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:12:38.415 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:38.415 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:38.415 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:38.415 14:06:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:38.415 aio_bdev 00:12:38.674 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd 00:12:38.674 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd 00:12:38.674 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.674 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:38.674 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.674 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.674 14:06:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:38.674 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd -t 2000 00:12:38.934 [ 00:12:38.934 { 00:12:38.934 "name": "f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd", 00:12:38.934 "aliases": [ 00:12:38.934 "lvs/lvol" 00:12:38.934 ], 00:12:38.934 "product_name": "Logical Volume", 00:12:38.934 "block_size": 4096, 00:12:38.934 "num_blocks": 38912, 00:12:38.934 "uuid": "f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd", 00:12:38.934 "assigned_rate_limits": { 00:12:38.934 "rw_ios_per_sec": 0, 00:12:38.934 "rw_mbytes_per_sec": 0, 00:12:38.934 "r_mbytes_per_sec": 0, 00:12:38.934 "w_mbytes_per_sec": 0 00:12:38.934 }, 00:12:38.934 "claimed": false, 00:12:38.934 "zoned": false, 00:12:38.934 "supported_io_types": { 00:12:38.934 "read": true, 00:12:38.934 "write": true, 00:12:38.934 "unmap": true, 00:12:38.934 "flush": false, 00:12:38.934 "reset": true, 00:12:38.934 "nvme_admin": false, 00:12:38.934 "nvme_io": false, 00:12:38.934 "nvme_io_md": false, 00:12:38.934 "write_zeroes": true, 00:12:38.934 "zcopy": false, 00:12:38.934 "get_zone_info": false, 00:12:38.934 "zone_management": false, 00:12:38.934 "zone_append": false, 00:12:38.934 "compare": false, 00:12:38.934 "compare_and_write": false, 00:12:38.934 "abort": false, 00:12:38.934 "seek_hole": true, 00:12:38.934 "seek_data": true, 00:12:38.934 "copy": false, 00:12:38.934 "nvme_iov_md": false 00:12:38.934 }, 00:12:38.934 "driver_specific": { 00:12:38.934 "lvol": { 00:12:38.934 "lvol_store_uuid": "1c4275eb-c709-4894-946f-e8322b34c7da", 00:12:38.934 "base_bdev": "aio_bdev", 00:12:38.934 "thin_provision": false, 00:12:38.934 "num_allocated_clusters": 38, 00:12:38.934 "snapshot": false, 00:12:38.934 "clone": false, 00:12:38.934 "esnap_clone": false 00:12:38.934 } 00:12:38.934 } 00:12:38.934 } 00:12:38.934 ] 00:12:38.934 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:38.934 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:38.934 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:39.215 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:39.215 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:39.215 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:39.215 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:39.215 14:06:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f6e11ae3-466a-40e8-bdbd-f6ffa9fd38dd 00:12:39.476 14:06:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1c4275eb-c709-4894-946f-e8322b34c7da 00:12:39.737 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:39.737 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:39.737 00:12:39.737 real 0m17.519s 00:12:39.737 user 0m45.489s 00:12:39.737 sys 0m3.147s 00:12:39.737 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.737 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:39.737 ************************************ 00:12:39.737 END TEST lvs_grow_dirty 00:12:39.737 ************************************ 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:39.997 nvmf_trace.0 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:39.997 rmmod nvme_tcp 00:12:39.997 rmmod nvme_fabrics 00:12:39.997 rmmod nvme_keyring 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:12:39.997 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:12:39.997 
14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2651435 ']' 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2651435 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2651435 ']' 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2651435 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2651435 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2651435' 00:12:39.998 killing process with pid 2651435 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2651435 00:12:39.998 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2651435 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.257 14:06:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.169 14:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.169 00:12:42.169 real 0m44.800s 00:12:42.169 user 1m7.355s 00:12:42.169 sys 0m10.852s 00:12:42.169 14:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.169 14:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:42.169 ************************************ 00:12:42.169 END TEST nvmf_lvs_grow 00:12:42.169 ************************************ 00:12:42.429 14:06:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:42.429 14:06:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.429 14:06:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.429 14:06:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:42.429 ************************************ 00:12:42.430 START TEST nvmf_bdev_io_wait 00:12:42.430 ************************************ 00:12:42.430 14:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:42.430 * Looking for test storage... 00:12:42.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.430 14:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:42.430 14:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:12:42.430 14:06:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:42.430 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:42.430 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:42.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.691 --rc genhtml_branch_coverage=1 00:12:42.691 --rc genhtml_function_coverage=1 00:12:42.691 --rc genhtml_legend=1 00:12:42.691 --rc geninfo_all_blocks=1 00:12:42.691 --rc geninfo_unexecuted_blocks=1 00:12:42.691 00:12:42.691 ' 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:42.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.691 --rc genhtml_branch_coverage=1 00:12:42.691 --rc genhtml_function_coverage=1 00:12:42.691 --rc genhtml_legend=1 00:12:42.691 --rc geninfo_all_blocks=1 00:12:42.691 --rc geninfo_unexecuted_blocks=1 00:12:42.691 00:12:42.691 ' 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:42.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.691 --rc genhtml_branch_coverage=1 00:12:42.691 --rc genhtml_function_coverage=1 00:12:42.691 --rc genhtml_legend=1 00:12:42.691 --rc geninfo_all_blocks=1 00:12:42.691 --rc geninfo_unexecuted_blocks=1 00:12:42.691 00:12:42.691 ' 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:42.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.691 --rc genhtml_branch_coverage=1 00:12:42.691 --rc genhtml_function_coverage=1 00:12:42.691 --rc genhtml_legend=1 00:12:42.691 --rc geninfo_all_blocks=1 00:12:42.691 --rc geninfo_unexecuted_blocks=1 00:12:42.691 00:12:42.691 ' 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.691 14:06:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.691 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.692 14:06:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.825 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:50.826 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:50.826 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.826 14:06:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:50.826 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:50.826 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.826 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:50.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:12:50.826 00:12:50.826 --- 10.0.0.2 ping statistics --- 00:12:50.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.827 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:12:50.827 00:12:50.827 --- 10.0.0.1 ping statistics --- 00:12:50.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.827 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2656477 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2656477 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2656477 ']' 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.827 14:06:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:50.827 [2024-12-06 14:06:38.665889] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:12:50.827 [2024-12-06 14:06:38.665955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.827 [2024-12-06 14:06:38.767286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.827 [2024-12-06 14:06:38.822284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.827 [2024-12-06 14:06:38.822340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.827 [2024-12-06 14:06:38.822349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.827 [2024-12-06 14:06:38.822357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.827 [2024-12-06 14:06:38.822364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.827 [2024-12-06 14:06:38.824735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.827 [2024-12-06 14:06:38.824890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.827 [2024-12-06 14:06:38.825055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.827 [2024-12-06 14:06:38.825055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.089 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:12:51.090 [2024-12-06 14:06:39.612048] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.090 Malloc0 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:51.090 [2024-12-06 14:06:39.677516] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2656741 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2656743 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:51.090 { 00:12:51.090 "params": { 
00:12:51.090 "name": "Nvme$subsystem", 00:12:51.090 "trtype": "$TEST_TRANSPORT", 00:12:51.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.090 "adrfam": "ipv4", 00:12:51.090 "trsvcid": "$NVMF_PORT", 00:12:51.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.090 "hdgst": ${hdgst:-false}, 00:12:51.090 "ddgst": ${ddgst:-false} 00:12:51.090 }, 00:12:51.090 "method": "bdev_nvme_attach_controller" 00:12:51.090 } 00:12:51.090 EOF 00:12:51.090 )") 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2656745 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:51.090 { 00:12:51.090 "params": { 00:12:51.090 "name": "Nvme$subsystem", 00:12:51.090 "trtype": "$TEST_TRANSPORT", 00:12:51.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.090 "adrfam": "ipv4", 00:12:51.090 "trsvcid": "$NVMF_PORT", 00:12:51.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.090 "hdgst": ${hdgst:-false}, 00:12:51.090 "ddgst": ${ddgst:-false} 00:12:51.090 }, 00:12:51.090 "method": "bdev_nvme_attach_controller" 00:12:51.090 } 00:12:51.090 EOF 00:12:51.090 )") 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2656748 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:51.090 { 00:12:51.090 "params": { 00:12:51.090 "name": "Nvme$subsystem", 00:12:51.090 "trtype": "$TEST_TRANSPORT", 00:12:51.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.090 "adrfam": "ipv4", 00:12:51.090 "trsvcid": "$NVMF_PORT", 00:12:51.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.090 "hdgst": ${hdgst:-false}, 
00:12:51.090 "ddgst": ${ddgst:-false} 00:12:51.090 }, 00:12:51.090 "method": "bdev_nvme_attach_controller" 00:12:51.090 } 00:12:51.090 EOF 00:12:51.090 )") 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:51.090 { 00:12:51.090 "params": { 00:12:51.090 "name": "Nvme$subsystem", 00:12:51.090 "trtype": "$TEST_TRANSPORT", 00:12:51.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.090 "adrfam": "ipv4", 00:12:51.090 "trsvcid": "$NVMF_PORT", 00:12:51.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.090 "hdgst": ${hdgst:-false}, 00:12:51.090 "ddgst": ${ddgst:-false} 00:12:51.090 }, 00:12:51.090 "method": "bdev_nvme_attach_controller" 00:12:51.090 } 00:12:51.090 EOF 00:12:51.090 )") 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2656741 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:51.090 "params": { 00:12:51.090 "name": "Nvme1", 00:12:51.090 "trtype": "tcp", 00:12:51.090 "traddr": "10.0.0.2", 00:12:51.090 "adrfam": "ipv4", 00:12:51.090 "trsvcid": "4420", 00:12:51.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.090 "hdgst": false, 00:12:51.090 "ddgst": false 00:12:51.090 }, 00:12:51.090 "method": "bdev_nvme_attach_controller" 00:12:51.090 }' 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:51.090 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:51.090 "params": { 00:12:51.090 "name": "Nvme1", 00:12:51.090 "trtype": "tcp", 00:12:51.090 "traddr": "10.0.0.2", 00:12:51.090 "adrfam": "ipv4", 00:12:51.090 "trsvcid": "4420", 00:12:51.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.090 "hdgst": false, 00:12:51.090 "ddgst": false 00:12:51.090 }, 00:12:51.090 "method": "bdev_nvme_attach_controller" 00:12:51.091 }' 00:12:51.091 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:51.091 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:51.091 "params": { 00:12:51.091 "name": "Nvme1", 00:12:51.091 "trtype": "tcp", 00:12:51.091 "traddr": "10.0.0.2", 00:12:51.091 "adrfam": "ipv4", 00:12:51.091 "trsvcid": "4420", 00:12:51.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.091 "hdgst": false, 00:12:51.091 "ddgst": false 00:12:51.091 }, 00:12:51.091 "method": "bdev_nvme_attach_controller" 00:12:51.091 }' 00:12:51.091 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:51.091 14:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:51.091 "params": { 00:12:51.091 "name": "Nvme1", 00:12:51.091 "trtype": "tcp", 00:12:51.091 "traddr": "10.0.0.2", 00:12:51.091 "adrfam": "ipv4", 00:12:51.091 "trsvcid": "4420", 00:12:51.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.091 "hdgst": false, 00:12:51.091 "ddgst": false 00:12:51.091 }, 00:12:51.091 "method": "bdev_nvme_attach_controller" 00:12:51.091 }' 00:12:51.352 [2024-12-06 14:06:39.737466] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:12:51.352 [2024-12-06 14:06:39.737539] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:51.352 [2024-12-06 14:06:39.738746] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:12:51.352 [2024-12-06 14:06:39.738812] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:51.352 [2024-12-06 14:06:39.744410] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:12:51.352 [2024-12-06 14:06:39.744498] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:51.352 [2024-12-06 14:06:39.745792] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:12:51.352 [2024-12-06 14:06:39.745867] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:51.352 [2024-12-06 14:06:39.939578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.352 [2024-12-06 14:06:39.978195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:51.613 [2024-12-06 14:06:40.033033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.613 [2024-12-06 14:06:40.075441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:51.613 [2024-12-06 14:06:40.130297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.613 [2024-12-06 14:06:40.170800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:51.613 [2024-12-06 14:06:40.227382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.873 [2024-12-06 14:06:40.267972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:12:51.873 Running I/O for 1 seconds... 00:12:51.873 Running I/O for 1 seconds... 00:12:51.873 Running I/O for 1 seconds... 00:12:52.133 Running I/O for 1 seconds... 00:12:53.075 179960.00 IOPS, 702.97 MiB/s 00:12:53.075 Latency(us) 00:12:53.075 [2024-12-06T13:06:41.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.075 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:53.075 Nvme1n1 : 1.00 179603.23 701.58 0.00 0.00 708.83 298.67 1966.08 00:12:53.075 [2024-12-06T13:06:41.715Z] =================================================================================================================== 00:12:53.075 [2024-12-06T13:06:41.715Z] Total : 179603.23 701.58 0.00 0.00 708.83 298.67 1966.08 00:12:53.075 7063.00 IOPS, 27.59 MiB/s 00:12:53.075 Latency(us) 00:12:53.075 [2024-12-06T13:06:41.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.075 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:53.075 Nvme1n1 : 1.02 7073.53 27.63 0.00 0.00 17934.71 6362.45 25668.27 00:12:53.075 [2024-12-06T13:06:41.715Z] =================================================================================================================== 00:12:53.075 [2024-12-06T13:06:41.715Z] Total : 7073.53 27.63 0.00 0.00 17934.71 6362.45 25668.27 00:12:53.075 12502.00 IOPS, 48.84 MiB/s 00:12:53.075 Latency(us) 00:12:53.075 [2024-12-06T13:06:41.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.075 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:53.075 Nvme1n1 : 1.01 12556.90 49.05 0.00 0.00 10157.07 5297.49 20534.61 00:12:53.075 [2024-12-06T13:06:41.715Z] =================================================================================================================== 00:12:53.075 [2024-12-06T13:06:41.715Z] Total : 12556.90 49.05 0.00 0.00 10157.07 5297.49 20534.61 00:12:53.075 6931.00 IOPS, 27.07 MiB/s [2024-12-06T13:06:41.715Z] 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2656743 00:12:53.075 00:12:53.075 Latency(us) 00:12:53.075 [2024-12-06T13:06:41.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.075 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:53.075 Nvme1n1 : 1.01 7020.18 27.42 0.00 0.00 18181.15 4396.37 
39758.51 00:12:53.075 [2024-12-06T13:06:41.715Z] =================================================================================================================== 00:12:53.075 [2024-12-06T13:06:41.715Z] Total : 7020.18 27.42 0.00 0.00 18181.15 4396.37 39758.51 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2656745 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2656748 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:53.075 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:53.075 rmmod nvme_tcp 00:12:53.075 rmmod nvme_fabrics 00:12:53.336 rmmod nvme_keyring 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2656477 ']' 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2656477 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2656477 ']' 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2656477 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2656477 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.336 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.337 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2656477' 00:12:53.337 killing 
process with pid 2656477 00:12:53.337 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2656477 00:12:53.337 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2656477 00:12:53.337 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:53.337 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:53.337 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:53.337 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:53.337 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:12:53.337 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:53.337 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:12:53.598 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.598 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:53.598 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.598 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.598 14:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.512 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:55.512 00:12:55.512 real 0m13.170s 00:12:55.512 user 0m20.153s 00:12:55.512 sys 0m7.326s 00:12:55.512 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.512 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:55.512 ************************************ 00:12:55.512 END TEST nvmf_bdev_io_wait 00:12:55.512 ************************************ 00:12:55.512 14:06:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:55.512 14:06:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:55.512 14:06:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.512 14:06:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:55.512 ************************************ 00:12:55.512 START TEST nvmf_queue_depth 00:12:55.512 ************************************ 00:12:55.512 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:55.774 * Looking for test storage... 
00:12:55.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:55.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.774 --rc genhtml_branch_coverage=1 00:12:55.774 --rc genhtml_function_coverage=1 00:12:55.774 --rc genhtml_legend=1 00:12:55.774 --rc geninfo_all_blocks=1 00:12:55.774 --rc geninfo_unexecuted_blocks=1 00:12:55.774 00:12:55.774 ' 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:55.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.774 --rc genhtml_branch_coverage=1 00:12:55.774 --rc genhtml_function_coverage=1 00:12:55.774 --rc genhtml_legend=1 00:12:55.774 --rc geninfo_all_blocks=1 00:12:55.774 --rc geninfo_unexecuted_blocks=1 00:12:55.774 00:12:55.774 ' 00:12:55.774 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:55.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.775 --rc genhtml_branch_coverage=1 00:12:55.775 --rc genhtml_function_coverage=1 00:12:55.775 --rc genhtml_legend=1 00:12:55.775 --rc geninfo_all_blocks=1 00:12:55.775 --rc geninfo_unexecuted_blocks=1 00:12:55.775 00:12:55.775 ' 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:55.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.775 --rc genhtml_branch_coverage=1 00:12:55.775 --rc genhtml_function_coverage=1 00:12:55.775 --rc genhtml_legend=1 00:12:55.775 --rc geninfo_all_blocks=1 00:12:55.775 --rc geninfo_unexecuted_blocks=1 00:12:55.775 00:12:55.775 ' 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:12:55.775 14:06:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:03.968 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:03.968 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:03.968 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.968 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:03.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:13:03.969 00:13:03.969 --- 10.0.0.2 ping statistics --- 00:13:03.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.969 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:13:03.969 00:13:03.969 --- 10.0.0.1 ping statistics --- 00:13:03.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.969 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2661442 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2661442 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2661442 ']' 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.969 14:06:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:03.969 [2024-12-06 14:06:51.911286] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:13:03.969 [2024-12-06 14:06:51.911351] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.969 [2024-12-06 14:06:52.013622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.969 [2024-12-06 14:06:52.063942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.969 [2024-12-06 14:06:52.063989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.969 [2024-12-06 14:06:52.063997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.969 [2024-12-06 14:06:52.064005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.969 [2024-12-06 14:06:52.064011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.969 [2024-12-06 14:06:52.064815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.230 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.230 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:04.231 [2024-12-06 14:06:52.775931] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:04.231 Malloc0 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.231 14:06:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:04.231 [2024-12-06 14:06:52.837196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2661636 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2661636 /var/tmp/bdevperf.sock 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2661636 ']' 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:04.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.231 14:06:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:04.492 [2024-12-06 14:06:52.896221] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:13:04.492 [2024-12-06 14:06:52.896281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661636 ] 00:13:04.493 [2024-12-06 14:06:52.987603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.493 [2024-12-06 14:06:53.041310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.434 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.435 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:05.435 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:05.435 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.435 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:05.435 NVMe0n1 00:13:05.435 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.435 14:06:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:05.435 Running I/O for 10 seconds... 00:13:07.394 8300.00 IOPS, 32.42 MiB/s [2024-12-06T13:06:57.416Z] 9848.50 IOPS, 38.47 MiB/s [2024-12-06T13:06:58.355Z] 10577.33 IOPS, 41.32 MiB/s [2024-12-06T13:06:59.292Z] 11058.25 IOPS, 43.20 MiB/s [2024-12-06T13:07:00.232Z] 11589.00 IOPS, 45.27 MiB/s [2024-12-06T13:07:01.171Z] 11939.33 IOPS, 46.64 MiB/s [2024-12-06T13:07:02.109Z] 12149.57 IOPS, 47.46 MiB/s [2024-12-06T13:07:03.046Z] 12349.62 IOPS, 48.24 MiB/s [2024-12-06T13:07:04.426Z] 12499.78 IOPS, 48.83 MiB/s [2024-12-06T13:07:04.426Z] 12587.00 IOPS, 49.17 MiB/s 00:13:15.786 Latency(us) 00:13:15.786 [2024-12-06T13:07:04.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.786 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:15.787 Verification LBA range: start 0x0 length 0x4000 00:13:15.787 NVMe0n1 : 10.05 12624.70 49.32 0.00 0.00 80843.13 16056.32 72089.60 00:13:15.787 [2024-12-06T13:07:04.427Z] =================================================================================================================== 00:13:15.787 [2024-12-06T13:07:04.427Z] Total : 12624.70 49.32 0.00 0.00 80843.13 16056.32 72089.60 00:13:15.787 { 00:13:15.787 "results": [ 00:13:15.787 { 00:13:15.787 "job": "NVMe0n1", 00:13:15.787 "core_mask": "0x1", 00:13:15.787 "workload": "verify", 00:13:15.787 "status": "finished", 00:13:15.787 "verify_range": { 00:13:15.787 "start": 0, 00:13:15.787 "length": 16384 00:13:15.787 }, 00:13:15.787 "queue_depth": 1024, 00:13:15.787 "io_size": 4096, 00:13:15.787 "runtime": 10.050695, 00:13:15.787 "iops": 12624.699087973519, 00:13:15.787 "mibps": 49.31523081239656, 00:13:15.787 "io_failed": 0, 00:13:15.787 "io_timeout": 0, 00:13:15.787 "avg_latency_us": 80843.1251970651, 00:13:15.787 "min_latency_us": 16056.32, 00:13:15.787 "max_latency_us": 72089.6 00:13:15.787 } 00:13:15.787 ], 00:13:15.787 "core_count": 1 00:13:15.787 } 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2661636 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2661636 ']' 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2661636 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2661636 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2661636' 00:13:15.787 killing process with pid 2661636 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2661636 00:13:15.787 Received shutdown signal, test time was about 10.000000 seconds 00:13:15.787 00:13:15.787 Latency(us) 00:13:15.787 [2024-12-06T13:07:04.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.787 [2024-12-06T13:07:04.427Z] =================================================================================================================== 00:13:15.787 [2024-12-06T13:07:04.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2661636 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.787 rmmod nvme_tcp 00:13:15.787 rmmod nvme_fabrics 00:13:15.787 rmmod nvme_keyring 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2661442 ']' 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2661442 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2661442 ']' 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2661442 
00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2661442 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2661442' 00:13:15.787 killing process with pid 2661442 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2661442 00:13:15.787 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2661442 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.047 14:07:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:18.588 00:13:18.588 real 0m22.476s 00:13:18.588 user 0m25.842s 00:13:18.588 sys 0m7.011s 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:18.588 ************************************ 00:13:18.588 END TEST nvmf_queue_depth 00:13:18.588 ************************************ 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:18.588 
************************************ 00:13:18.588 START TEST nvmf_target_multipath 00:13:18.588 ************************************ 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:18.588 * Looking for test storage... 00:13:18.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:18.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.588 --rc genhtml_branch_coverage=1 00:13:18.588 --rc genhtml_function_coverage=1 00:13:18.588 --rc genhtml_legend=1 00:13:18.588 --rc geninfo_all_blocks=1 00:13:18.588 --rc geninfo_unexecuted_blocks=1 00:13:18.588 00:13:18.588 ' 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:18.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.588 --rc genhtml_branch_coverage=1 00:13:18.588 --rc genhtml_function_coverage=1 00:13:18.588 --rc genhtml_legend=1 00:13:18.588 --rc geninfo_all_blocks=1 00:13:18.588 --rc geninfo_unexecuted_blocks=1 00:13:18.588 00:13:18.588 ' 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:18.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.588 --rc genhtml_branch_coverage=1 00:13:18.588 --rc genhtml_function_coverage=1 00:13:18.588 --rc genhtml_legend=1 00:13:18.588 --rc geninfo_all_blocks=1 00:13:18.588 --rc geninfo_unexecuted_blocks=1 00:13:18.588 00:13:18.588 ' 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:18.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.588 --rc genhtml_branch_coverage=1 00:13:18.588 --rc genhtml_function_coverage=1 00:13:18.588 --rc genhtml_legend=1 00:13:18.588 --rc geninfo_all_blocks=1 00:13:18.588 --rc geninfo_unexecuted_blocks=1 00:13:18.588 00:13:18.588 ' 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.588 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.589 14:07:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:26.720 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:26.720 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:26.720 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.720 14:07:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:26.720 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:26.720 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:26.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:13:26.721 00:13:26.721 --- 10.0.0.2 ping statistics --- 00:13:26.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.721 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:13:26.721 00:13:26.721 --- 10.0.0.1 ping statistics --- 00:13:26.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.721 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:26.721 only one NIC for nvmf test 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
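The nvmf_tcp_init trace above wires the two e810 ports into a split data path: cvl_0_0 is moved into a dedicated network namespace as the target-side interface (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), an iptables rule opens port 4420, and one ping in each direction confirms reachability. As a minimal standalone sketch, with device names, addresses, and the iptables comment tag copied from the trace (nothing here beyond what the log shows):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port out of the default ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator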
00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.721 rmmod nvme_tcp 00:13:26.721 rmmod nvme_fabrics 00:13:26.721 rmmod nvme_keyring 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.721 14:07:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:28.105 00:13:28.105 real 0m9.898s 00:13:28.105 user 0m2.132s 00:13:28.105 sys 0m5.722s 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:28.105 ************************************ 00:13:28.105 END TEST nvmf_target_multipath 00:13:28.105 ************************************ 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:28.105 ************************************ 00:13:28.105 START TEST nvmf_zcopy 00:13:28.105 ************************************ 00:13:28.105 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:28.367 * Looking for test storage... 
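The multipath test above never exercises multiple paths: with no second target IP configured (NVMF_SECOND_TARGET_IP is set empty at nvmf/common.sh@262 earlier in the trace), the script prints 'only one NIC for nvmf test', tears down, and exits 0, which is why the END TEST banner reports success after roughly ten seconds (real 0m9.898s) of setup alone. A sketch of that guard, reconstructed from the target/multipath.sh@45-48 trace lines (the name of the empty variable is not visible in the expanded trace and is assumed here from the nvmf/common.sh@262 assignment):

  # guard traced at target/multipath.sh@45-48; variable name assumed
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
          echo 'only one NIC for nvmf test'
          nvmftestfini
          exit 0
  fi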
00:13:28.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:28.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.367 --rc genhtml_branch_coverage=1 00:13:28.367 --rc genhtml_function_coverage=1 00:13:28.367 --rc genhtml_legend=1 00:13:28.367 --rc geninfo_all_blocks=1 00:13:28.367 --rc geninfo_unexecuted_blocks=1 00:13:28.367 00:13:28.367 ' 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:28.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.367 --rc genhtml_branch_coverage=1 00:13:28.367 --rc genhtml_function_coverage=1 00:13:28.367 --rc genhtml_legend=1 00:13:28.367 --rc geninfo_all_blocks=1 00:13:28.367 --rc geninfo_unexecuted_blocks=1 00:13:28.367 00:13:28.367 ' 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:28.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.367 --rc genhtml_branch_coverage=1 00:13:28.367 --rc genhtml_function_coverage=1 00:13:28.367 --rc genhtml_legend=1 00:13:28.367 --rc geninfo_all_blocks=1 00:13:28.367 --rc geninfo_unexecuted_blocks=1 00:13:28.367 00:13:28.367 ' 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:28.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.367 --rc genhtml_branch_coverage=1 00:13:28.367 --rc genhtml_function_coverage=1 00:13:28.367 --rc genhtml_legend=1 00:13:28.367 --rc geninfo_all_blocks=1 00:13:28.367 --rc geninfo_unexecuted_blocks=1 00:13:28.367 00:13:28.367 ' 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.367 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:28.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:13:28.368 14:07:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:36.511 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:36.511 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:36.511 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:36.512 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:36.512 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:36.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:13:36.512 00:13:36.512 --- 10.0.0.2 ping statistics --- 00:13:36.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.512 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:13:36.512 00:13:36.512 --- 10.0.0.1 ping statistics --- 00:13:36.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.512 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2672458 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2672458 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2672458 ']' 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.512 14:07:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.512 [2024-12-06 14:07:24.466518] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
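For the zcopy test the target application itself runs inside the target namespace: nvmfappstart launches nvmf_tgt via ip netns exec on a single core (-m 0x2, reactor on core 1 per the startup notices below), with shared-memory id 0 (-i 0) and all tracepoint groups enabled (-e 0xFFFF), then waits for its RPC socket. Rewrapped from the nvmf/common.sh@508-510 trace lines (the backgrounding and pid capture are inferred from nvmfpid being recorded immediately afterwards, not shown verbatim in the trace):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                 # the trace records nvmfpid=2672458
  waitforlisten "$nvmfpid"   # harness helper; blocks until the RPC socket /var/tmp/spdk.sock answers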
00:13:36.512 [2024-12-06 14:07:24.466582] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.512 [2024-12-06 14:07:24.566351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.512 [2024-12-06 14:07:24.616258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.512 [2024-12-06 14:07:24.616309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.512 [2024-12-06 14:07:24.616318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.512 [2024-12-06 14:07:24.616325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.513 [2024-12-06 14:07:24.616331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.513 [2024-12-06 14:07:24.617081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.772 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.772 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:13:36.772 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:36.772 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:36.772 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.772 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.772 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.773 [2024-12-06 14:07:25.329937] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.773 [2024-12-06 14:07:25.354219] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.773 malloc0 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:36.773 { 00:13:36.773 "params": { 00:13:36.773 "name": "Nvme$subsystem", 00:13:36.773 "trtype": "$TEST_TRANSPORT", 00:13:36.773 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:36.773 "adrfam": "ipv4", 00:13:36.773 "trsvcid": "$NVMF_PORT", 00:13:36.773 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:36.773 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:36.773 "hdgst": ${hdgst:-false}, 00:13:36.773 "ddgst": ${ddgst:-false} 00:13:36.773 }, 00:13:36.773 "method": "bdev_nvme_attach_controller" 00:13:36.773 } 00:13:36.773 EOF 00:13:36.773 )") 00:13:36.773 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:37.032 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
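With nvmf_tgt started inside the namespace on core mask 0x2, zcopy.sh configures it over JSON-RPC; each rpc_cmd call above forwards its arguments to scripts/rpc.py against the default /var/tmp/spdk.sock socket. A hedged replay of that sequence with rpc.py directly, flags copied verbatim from the trace (the zero-copy and in-capsule options in particular are taken as-is from the logged command line):

# Hedged sketch: same RPC sequence as the rpc_cmd calls traced above.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport, zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                    # 32 MiB ramdisk backing the namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1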
00:13:37.032 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:37.032 14:07:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:37.032 "params": { 00:13:37.032 "name": "Nvme1", 00:13:37.032 "trtype": "tcp", 00:13:37.032 "traddr": "10.0.0.2", 00:13:37.032 "adrfam": "ipv4", 00:13:37.032 "trsvcid": "4420", 00:13:37.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:37.032 "hdgst": false, 00:13:37.032 "ddgst": false 00:13:37.032 }, 00:13:37.032 "method": "bdev_nvme_attach_controller" 00:13:37.032 }' 00:13:37.032 [2024-12-06 14:07:25.457843] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:13:37.032 [2024-12-06 14:07:25.457912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672528 ] 00:13:37.032 [2024-12-06 14:07:25.552982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.032 [2024-12-06 14:07:25.606550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.291 Running I/O for 10 seconds... 00:13:39.619 6524.00 IOPS, 50.97 MiB/s [2024-12-06T13:07:29.201Z] 6578.00 IOPS, 51.39 MiB/s [2024-12-06T13:07:30.144Z] 7344.00 IOPS, 57.38 MiB/s [2024-12-06T13:07:31.085Z] 7969.25 IOPS, 62.26 MiB/s [2024-12-06T13:07:32.027Z] 8346.20 IOPS, 65.20 MiB/s [2024-12-06T13:07:33.098Z] 8593.00 IOPS, 67.13 MiB/s [2024-12-06T13:07:34.039Z] 8770.14 IOPS, 68.52 MiB/s [2024-12-06T13:07:34.979Z] 8905.62 IOPS, 69.58 MiB/s [2024-12-06T13:07:35.922Z] 9008.67 IOPS, 70.38 MiB/s [2024-12-06T13:07:35.922Z] 9092.30 IOPS, 71.03 MiB/s 00:13:47.282 Latency(us) 00:13:47.282 [2024-12-06T13:07:35.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.282 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:47.282 Verification LBA range: start 0x0 length 0x1000 00:13:47.282 Nvme1n1 : 10.01 9095.71 71.06 0.00 0.00 14027.70 2034.35 28398.93 00:13:47.282 [2024-12-06T13:07:35.922Z] =================================================================================================================== 00:13:47.282 [2024-12-06T13:07:35.922Z] Total : 9095.71 71.06 0.00 0.00 14027.70 2034.35 28398.93 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2674631 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:47.543 { 00:13:47.543 "params": { 00:13:47.543 "name": 
"Nvme$subsystem", 00:13:47.543 "trtype": "$TEST_TRANSPORT", 00:13:47.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.543 "adrfam": "ipv4", 00:13:47.543 "trsvcid": "$NVMF_PORT", 00:13:47.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.543 "hdgst": ${hdgst:-false}, 00:13:47.543 "ddgst": ${ddgst:-false} 00:13:47.543 }, 00:13:47.543 "method": "bdev_nvme_attach_controller" 00:13:47.543 } 00:13:47.543 EOF 00:13:47.543 )") 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:47.543 [2024-12-06 14:07:35.997918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.543 [2024-12-06 14:07:35.997945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.543 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:13:47.543 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:47.543 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:47.543 "params": { 00:13:47.543 "name": "Nvme1", 00:13:47.543 "trtype": "tcp", 00:13:47.543 "traddr": "10.0.0.2", 00:13:47.543 "adrfam": "ipv4", 00:13:47.543 "trsvcid": "4420", 00:13:47.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.543 "hdgst": false, 00:13:47.543 "ddgst": false 00:13:47.543 }, 00:13:47.543 "method": "bdev_nvme_attach_controller" 00:13:47.543 }' 00:13:47.544 [2024-12-06 14:07:36.009921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.009930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.021951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.021958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.033982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.033990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.046013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.046022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.049050] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:13:47.544 [2024-12-06 14:07:36.049106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674631 ] 00:13:47.544 [2024-12-06 14:07:36.058044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.058052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.070075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.070083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.082108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.082116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.094139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.094147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.106170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.106178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.118202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.118210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.130232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.130240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.135093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.544 [2024-12-06 14:07:36.142262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.142271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.154292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.154302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.164527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.544 [2024-12-06 14:07:36.166324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.166332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.544 [2024-12-06 14:07:36.178358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.544 [2024-12-06 14:07:36.178367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.804 [2024-12-06 14:07:36.190389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.804 [2024-12-06 14:07:36.190401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.804 [2024-12-06 14:07:36.202415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:13:47.804 [2024-12-06 14:07:36.202426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.804 [2024-12-06 14:07:36.214447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.804 [2024-12-06 14:07:36.214460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.804 [2024-12-06 14:07:36.226479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.804 [2024-12-06 14:07:36.226487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.804 [2024-12-06 14:07:36.238521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.804 [2024-12-06 14:07:36.238537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.804 [2024-12-06 14:07:36.250540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.804 [2024-12-06 14:07:36.250552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.804 [2024-12-06 14:07:36.262571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.804 [2024-12-06 14:07:36.262581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.804 [2024-12-06 14:07:36.274601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.804 [2024-12-06 14:07:36.274609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.286631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.286638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.298662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.298670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.310694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.310703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.322731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.322742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.334756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.334764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.346792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.346808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 Running I/O for 5 seconds... 
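From this point the log alternates two expected errors, "Requested NSID 1 already in use" from the subsystem layer and "Unable to add namespace" from the RPC layer, while bdevperf runs its 5-second random read/write workload. They are produced by the test repeatedly re-issuing nvmf_subsystem_add_ns for a namespace ID that already exists, exercising the subsystem pause/resume path under live zero-copy I/O. One way such a stream of failed add attempts could be generated (purely illustrative; the actual loop in target/zcopy.sh may differ):

# Illustrative only: re-issue the add-namespace RPC while bdevperf (started in
# the background, PID in $perfpid) keeps I/O outstanding; every attempt fails
# with "Requested NSID 1 already in use", matching the pairs of errors above.
while kill -0 "$perfpid" 2>/dev/null; do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done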
00:13:47.805 [2024-12-06 14:07:36.360703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.360718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.374281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.374297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.387505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.387521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.400655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.400671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.414133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.414149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.427169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.427185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.805 [2024-12-06 14:07:36.439773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:47.805 [2024-12-06 14:07:36.439789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.453337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.453353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.466763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.466778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.480173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.480187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.492780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.492794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.505237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.505252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.518480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.518495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.531990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.532005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.545638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 
[2024-12-06 14:07:36.545653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.558323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.558338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.571053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.571068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.584152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.584166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.596633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.596648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.609751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.609765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.622227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.622241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.635422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.635436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.647572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.647590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.661084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.661099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.673920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.673934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.686963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.686978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.066 [2024-12-06 14:07:36.700199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.066 [2024-12-06 14:07:36.700214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.713443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.713463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.726739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.726754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.739952] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.739966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.753406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.753420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.766521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.766535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.780161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.780175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.792920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.792934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.806258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.806272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.818894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.818908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.832347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.832362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.845538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.845552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.858644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.858658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.871248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.871262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.883967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.883981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.897675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.897697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.910097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.910112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.923069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.923083] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.936676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.936690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.389 [2024-12-06 14:07:36.949668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.389 [2024-12-06 14:07:36.949683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.390 [2024-12-06 14:07:36.962961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.390 [2024-12-06 14:07:36.962976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.390 [2024-12-06 14:07:36.975816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.390 [2024-12-06 14:07:36.975830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.390 [2024-12-06 14:07:36.988240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.390 [2024-12-06 14:07:36.988254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.390 [2024-12-06 14:07:37.001619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.390 [2024-12-06 14:07:37.001633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.390 [2024-12-06 14:07:37.014633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.390 [2024-12-06 14:07:37.014648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.650 [2024-12-06 14:07:37.027846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.650 [2024-12-06 14:07:37.027860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.650 [2024-12-06 14:07:37.041165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.650 [2024-12-06 14:07:37.041179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.650 [2024-12-06 14:07:37.054516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.650 [2024-12-06 14:07:37.054530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.650 [2024-12-06 14:07:37.068076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.650 [2024-12-06 14:07:37.068090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.650 [2024-12-06 14:07:37.081310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.650 [2024-12-06 14:07:37.081324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.650 [2024-12-06 14:07:37.094714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.650 [2024-12-06 14:07:37.094729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.107369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.107383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.120230] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.120245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.133658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.133672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.146762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.146781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.160227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.160241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.172717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.172731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.185053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.185067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.197653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.197668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.210781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.210796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.224566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.224581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.237233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.237248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.250187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.250202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.262839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.262854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.651 [2024-12-06 14:07:37.275149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.651 [2024-12-06 14:07:37.275164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.910 [2024-12-06 14:07:37.288851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.910 [2024-12-06 14:07:37.288866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.910 [2024-12-06 14:07:37.301409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.910 [2024-12-06 14:07:37.301424] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.910 [2024-12-06 14:07:37.314600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.910 [2024-12-06 14:07:37.314614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.910 [2024-12-06 14:07:37.327676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.910 [2024-12-06 14:07:37.327691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.910 [2024-12-06 14:07:37.340278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.910 [2024-12-06 14:07:37.340292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.910 [2024-12-06 14:07:37.352280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.910 [2024-12-06 14:07:37.352294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.910 19238.00 IOPS, 150.30 MiB/s [2024-12-06T13:07:37.550Z] [2024-12-06 14:07:37.366098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.910 [2024-12-06 14:07:37.366113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.910 [2024-12-06 14:07:37.378385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.910 [2024-12-06 14:07:37.378399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.910 [2024-12-06 14:07:37.391997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.910 [2024-12-06 14:07:37.392012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.910 [2024-12-06 14:07:37.404798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.404812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 14:07:37.416765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.416780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 14:07:37.429187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.429201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 14:07:37.442972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.442986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 14:07:37.454962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.454976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 14:07:37.468457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.468472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 14:07:37.481525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.481539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 
14:07:37.494281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.494295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 14:07:37.506820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.506835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 14:07:37.519395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.519410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 14:07:37.532586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.532601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:48.911 [2024-12-06 14:07:37.545183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:48.911 [2024-12-06 14:07:37.545198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.170 [2024-12-06 14:07:37.558014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.170 [2024-12-06 14:07:37.558029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.170 [2024-12-06 14:07:37.571556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.170 [2024-12-06 14:07:37.571571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.170 [2024-12-06 14:07:37.584499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.170 [2024-12-06 14:07:37.584514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.170 [2024-12-06 14:07:37.597061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.170 [2024-12-06 14:07:37.597076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.170 [2024-12-06 14:07:37.610722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.170 [2024-12-06 14:07:37.610736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.170 [2024-12-06 14:07:37.623452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.623472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.637118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.637133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.650443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.650463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.663543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.663558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.676540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.676555] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.688979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.688994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.702252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.702266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.714981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.714996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.727591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.727606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.740083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.740098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.753172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.753188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.765415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.765430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.778263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.778278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.791295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.791310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.171 [2024-12-06 14:07:37.804559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.171 [2024-12-06 14:07:37.804574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.817776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.817791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.830643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.830658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.843889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.843904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.856564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.856578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.869610] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.869625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.883205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.883219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.896032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.896047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.909516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.909531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.922789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.922805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.935480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.935495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.949038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.949053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.961310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.961325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.974286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.974301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:37.987755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:37.987770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:38.000709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:38.000725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:38.013717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:38.013732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:38.027078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:38.027092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:38.040521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:38.040535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.430 [2024-12-06 14:07:38.053538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:49.430 [2024-12-06 14:07:38.053553] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:49.430 [2024-12-06 14:07:38.066542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:49.430 [2024-12-06 14:07:38.066557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two *ERROR* lines repeat roughly every 13 ms, from [2024-12-06 14:07:38.079344] through [2024-12-06 14:07:41.352429] (elapsed 00:13:49.690 through 00:13:52.822), while the I/O workload keeps running; only the periodic throughput samples from that stretch are kept here ...]
00:13:49.950 19343.00 IOPS, 151.12 MiB/s [2024-12-06T13:07:38.591Z]
00:13:50.732 19381.67 IOPS, 151.42 MiB/s [2024-12-06T13:07:39.373Z]
00:13:51.777 19390.75 IOPS, 151.49 MiB/s [2024-12-06T13:07:40.417Z]
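The repeated *ERROR* pair above appears to be the expected result of the zcopy test re-issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still attached to nqn.2016-06.io.spdk:cnode1; nvmf_rpc_ns_paused then reports the same failure at the RPC layer. A minimal sketch of how the same subsystem.c error can be provoked by hand, assuming an already-running SPDK target and the stock scripts/rpc.py helper (the Malloc bdev names and the cnode9 NQN are illustrative, not taken from this run):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc0 -n 1   # NSID 1 is now in use
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc1 -n 1   # rejected: Requested NSID 1 already in use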
00:13:52.822 19397.80 IOPS, 151.55 MiB/s [2024-12-06T13:07:41.462Z]
00:13:52.822 [2024-12-06 14:07:41.365368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:52.822 [2024-12-06 14:07:41.365383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:52.822
00:13:52.822 Latency(us)
00:13:52.822 [2024-12-06T13:07:41.462Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:52.822 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:52.822 Nvme1n1                     :       5.01   19400.29     151.56       0.00     0.00    6591.94    2676.05   15619.41
00:13:52.822 [2024-12-06T13:07:41.462Z] ===================================================================================================================
00:13:52.822 [2024-12-06T13:07:41.462Z] Total                       :              19400.29     151.56       0.00     0.00    6591.94    2676.05   15619.41
[... the same two *ERROR* lines appear nine more times as the namespace loop winds down, from [2024-12-06 14:07:41.374583] through [2024-12-06 14:07:41.470831] (elapsed 00:13:52.822 through 00:13:53.083) ...]
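A quick back-of-the-envelope check (not part of the test output) that the summary above is self-consistent: at the 8192-byte I/O size, 19400.29 IOPS corresponds to the reported 151.56 MiB/s, and with a queue depth of 128 Little's law puts the average latency near the reported 6591.94 us:

  awk 'BEGIN { printf "%.2f MiB/s\n", 19400.29 * 8192 / (1024 * 1024) }'   # -> 151.56 MiB/s
  awk 'BEGIN { printf "%.0f us\n",   128 / 19400.29 * 1000000 }'           # -> ~6598 us average latency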
00:13:53.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2674631) - No such process
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2674631
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:53.083 delay0
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.083 14:07:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:13:53.083 [2024-12-06 14:07:41.603154] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:14:01.227 Initializing NVMe Controllers
00:14:01.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:01.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:01.227 Initialization complete. Launching workers.
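rpc_cmd in the autotest harness forwards to SPDK's scripts/rpc.py, so the namespace swap and abort run recorded above correspond roughly to the stand-alone sequence sketched below, reusing the names from this log. The bdev_delay_create latency arguments are in microseconds, so 1000000 makes every I/O on delay0 take on the order of a second, which is what gives the abort example outstanding commands to cancel; the NS/CTRLR counters that follow show how many of those aborts completed.

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # detach the original NSID 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                             # delay bdev layered on malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1  # re-expose it as NSID 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'            # queue I/O for 5 s and abort it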
00:14:01.227 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 262, failed: 24463 00:14:01.227 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 24609, failed to submit 116 00:14:01.227 success 24515, unsuccessful 94, failed 0 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:01.227 rmmod nvme_tcp 00:14:01.227 rmmod nvme_fabrics 00:14:01.227 rmmod nvme_keyring 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2672458 ']' 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2672458 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2672458 ']' 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2672458 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2672458 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2672458' 00:14:01.227 killing process with pid 2672458 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2672458 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2672458 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:01.227 14:07:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.227 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.610 14:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:02.610 00:14:02.610 real 0m34.292s 00:14:02.610 user 0m45.057s 00:14:02.610 sys 0m11.912s 00:14:02.610 14:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.610 14:07:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:02.610 ************************************ 00:14:02.610 END TEST nvmf_zcopy 00:14:02.610 ************************************ 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:02.611 ************************************ 00:14:02.611 START TEST nvmf_nmic 00:14:02.611 ************************************ 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:02.611 * Looking for test storage... 
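The zcopy run above ends by swapping the original namespace for a deliberately slow delay bdev and firing the abort example at it; the long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages appears to be the test re-issuing nvmf_subsystem_add_ns while NSID 1 is still attached, so it reads as expected noise rather than a failure (the case finishes with END TEST nvmf_zcopy). Condensed by hand into plain RPC calls -- a sketch only, assuming scripts/rpc.py is pointed at the running target's default socket and that malloc0 and nqn.2016-06.io.spdk:cnode1 already exist as created earlier in the test -- the namespace-swap portion is roughly:

# drop the original namespace, put a ~1 s (1,000,000 us) delay bdev in its place
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# 5 seconds of queued randrw I/O plus aborts against the slow namespace over NVMe/TCP
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'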
00:14:02.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.611 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:02.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.872 --rc genhtml_branch_coverage=1 00:14:02.872 --rc genhtml_function_coverage=1 00:14:02.872 --rc genhtml_legend=1 00:14:02.872 --rc geninfo_all_blocks=1 00:14:02.872 --rc geninfo_unexecuted_blocks=1 00:14:02.872 00:14:02.872 ' 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:02.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.872 --rc genhtml_branch_coverage=1 00:14:02.872 --rc genhtml_function_coverage=1 00:14:02.872 --rc genhtml_legend=1 00:14:02.872 --rc geninfo_all_blocks=1 00:14:02.872 --rc geninfo_unexecuted_blocks=1 00:14:02.872 00:14:02.872 ' 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:02.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.872 --rc genhtml_branch_coverage=1 00:14:02.872 --rc genhtml_function_coverage=1 00:14:02.872 --rc genhtml_legend=1 00:14:02.872 --rc geninfo_all_blocks=1 00:14:02.872 --rc geninfo_unexecuted_blocks=1 00:14:02.872 00:14:02.872 ' 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:02.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.872 --rc genhtml_branch_coverage=1 00:14:02.872 --rc genhtml_function_coverage=1 00:14:02.872 --rc genhtml_legend=1 00:14:02.872 --rc geninfo_all_blocks=1 00:14:02.872 --rc geninfo_unexecuted_blocks=1 00:14:02.872 00:14:02.872 ' 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
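The nvmftestinit trace that follows (PCI scan, namespace creation, addressing, pings) sets up a point-to-point TCP test bed between the two e810 ports. Reduced to plain ip(8)/iptables commands it amounts to roughly the sketch below; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this host and will differ elsewhere:

ip netns add cvl_0_0_ns_spdk                        # the target-side port gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                  # target address reachable from the initiator
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse direction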
00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.872 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:02.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:02.873 
14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:14:02.873 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:11.026 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:11.026 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:11.026 14:07:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.026 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:11.027 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:11.027 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:11.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:14:11.027 00:14:11.027 --- 10.0.0.2 ping statistics --- 00:14:11.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.027 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:14:11.027 00:14:11.027 --- 10.0.0.1 ping statistics --- 00:14:11.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.027 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2681496 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2681496 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2681496 ']' 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.027 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.027 [2024-12-06 14:07:58.873134] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:14:11.027 [2024-12-06 14:07:58.873197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.027 [2024-12-06 14:07:58.971228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.027 [2024-12-06 14:07:59.025719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.027 [2024-12-06 14:07:59.025770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.027 [2024-12-06 14:07:59.025778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.027 [2024-12-06 14:07:59.025785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.027 [2024-12-06 14:07:59.025792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.027 [2024-12-06 14:07:59.027823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.027 [2024-12-06 14:07:59.028085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.027 [2024-12-06 14:07:59.028246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.027 [2024-12-06 14:07:59.028247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.288 [2024-12-06 14:07:59.750019] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.288 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.289 Malloc0 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.289 [2024-12-06 14:07:59.834413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:11.289 test case1: single bdev can't be used in multiple subsystems 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.289 [2024-12-06 14:07:59.870287] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:11.289 [2024-12-06 14:07:59.870318] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:11.289 [2024-12-06 14:07:59.870326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.289 request: 00:14:11.289 { 00:14:11.289 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:11.289 "namespace": { 00:14:11.289 "bdev_name": "Malloc0", 00:14:11.289 "no_auto_visible": false, 
00:14:11.289 "hide_metadata": false 00:14:11.289 }, 00:14:11.289 "method": "nvmf_subsystem_add_ns", 00:14:11.289 "req_id": 1 00:14:11.289 } 00:14:11.289 Got JSON-RPC error response 00:14:11.289 response: 00:14:11.289 { 00:14:11.289 "code": -32602, 00:14:11.289 "message": "Invalid parameters" 00:14:11.289 } 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:11.289 Adding namespace failed - expected result. 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:11.289 test case2: host connect to nvmf target in multiple paths 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:11.289 [2024-12-06 14:07:59.882508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.289 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:13.201 14:08:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:14.580 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:14.580 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:14:14.580 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.580 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:14.580 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:14:16.491 14:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:16.491 14:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:16.491 14:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.491 14:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:16.491 14:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.491 14:08:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:14:16.491 14:08:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:16.491 [global] 00:14:16.491 thread=1 00:14:16.491 invalidate=1 00:14:16.491 rw=write 00:14:16.491 time_based=1 00:14:16.491 runtime=1 00:14:16.491 ioengine=libaio 00:14:16.491 direct=1 00:14:16.491 bs=4096 00:14:16.491 iodepth=1 00:14:16.491 norandommap=0 00:14:16.491 numjobs=1 00:14:16.491 00:14:16.491 verify_dump=1 00:14:16.491 verify_backlog=512 00:14:16.491 verify_state_save=0 00:14:16.491 do_verify=1 00:14:16.491 verify=crc32c-intel 00:14:16.491 [job0] 00:14:16.491 filename=/dev/nvme0n1 00:14:16.491 Could not set queue depth (nvme0n1) 00:14:16.751 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.751 fio-3.35 00:14:16.751 Starting 1 thread 00:14:18.137 00:14:18.137 job0: (groupid=0, jobs=1): err= 0: pid=2682800: Fri Dec 6 14:08:06 2024 00:14:18.137 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:18.137 slat (nsec): min=7328, max=60931, avg=27135.61, stdev=4656.27 00:14:18.137 clat (usec): min=521, max=1130, avg=894.40, stdev=111.46 00:14:18.137 lat (usec): min=548, max=1156, avg=921.54, stdev=112.00 00:14:18.137 clat percentiles (usec): 00:14:18.137 | 1.00th=[ 578], 5.00th=[ 668], 10.00th=[ 717], 20.00th=[ 799], 00:14:18.137 | 30.00th=[ 857], 40.00th=[ 898], 50.00th=[ 930], 60.00th=[ 955], 00:14:18.137 | 70.00th=[ 971], 80.00th=[ 988], 90.00th=[ 1004], 95.00th=[ 1020], 00:14:18.137 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[ 1123], 99.95th=[ 1123], 00:14:18.137 | 99.99th=[ 1123] 00:14:18.137 write: IOPS=980, BW=3920KiB/s (4014kB/s)(3924KiB/1001msec); 0 zone resets 00:14:18.137 slat (usec): min=9, max=26142, avg=55.63, stdev=833.82 00:14:18.137 clat (usec): min=192, max=841, avg=472.04, stdev=116.10 00:14:18.137 lat (usec): min=202, max=26491, avg=527.68, stdev=838.62 00:14:18.137 clat percentiles (usec): 00:14:18.137 | 1.00th=[ 260], 5.00th=[ 310], 10.00th=[ 338], 20.00th=[ 367], 00:14:18.137 | 30.00th=[ 416], 40.00th=[ 437], 50.00th=[ 457], 60.00th=[ 474], 00:14:18.137 | 70.00th=[ 494], 80.00th=[ 586], 90.00th=[ 660], 95.00th=[ 693], 00:14:18.137 | 99.00th=[ 750], 99.50th=[ 791], 99.90th=[ 840], 99.95th=[ 840], 00:14:18.137 | 99.99th=[ 840] 00:14:18.137 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:18.137 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:18.137 lat (usec) : 250=0.40%, 500=46.62%, 750=22.71%, 1000=25.99% 00:14:18.137 lat (msec) : 2=4.29% 00:14:18.137 cpu : usr=2.10%, sys=4.30%, ctx=1497, majf=0, minf=1 00:14:18.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:18.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.137 issued rwts: total=512,981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:18.137 00:14:18.137 Run status group 0 (all jobs): 00:14:18.137 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:14:18.137 WRITE: bw=3920KiB/s (4014kB/s), 3920KiB/s-3920KiB/s (4014kB/s-4014kB/s), io=3924KiB (4018kB), run=1001-1001msec 00:14:18.137 00:14:18.137 Disk stats (read/write): 00:14:18.137 nvme0n1: ios=564/754, merge=0/0, ticks=863/358, in_queue=1221, util=98.70% 00:14:18.137 
14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:18.137 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:18.137 rmmod nvme_tcp 00:14:18.137 rmmod nvme_fabrics 00:14:18.138 rmmod nvme_keyring 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2681496 ']' 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2681496 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2681496 ']' 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2681496 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2681496 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2681496' 00:14:18.138 killing process with pid 2681496 00:14:18.138 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2681496 00:14:18.138 
14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2681496 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.398 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.310 14:08:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:20.310 00:14:20.310 real 0m17.820s 00:14:20.310 user 0m48.934s 00:14:20.310 sys 0m6.600s 00:14:20.310 14:08:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.310 14:08:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 ************************************ 00:14:20.310 END TEST nvmf_nmic 00:14:20.310 ************************************ 00:14:20.310 14:08:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:20.310 14:08:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:20.310 14:08:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.310 14:08:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:20.573 ************************************ 00:14:20.573 START TEST nvmf_fio_target 00:14:20.573 ************************************ 00:14:20.573 14:08:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:20.573 * Looking for test storage... 
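The interesting part of the nmic run above is test case1: the same Malloc0 bdev is offered to a second subsystem and the target is expected to refuse it, because the bdev is already claimed exclusive_write by cnode1. Reduced by hand to direct RPC calls -- a sketch assuming scripts/rpc.py against the running target, with Malloc0 already attached to nqn.2016-06.io.spdk:cnode1 as in the trace -- the check is essentially:

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# Malloc0 is already claimed by cnode1, so this call must fail with "Invalid parameters"
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    && echo 'unexpected: add_ns succeeded' \
    || echo 'Adding namespace failed - expected result.'

The rejection originates in the bdev layer (bdev_open reports the exclusive_write claim held by the NVMe-oF Target module), which is why the RPC surfaces a generic -32602 "Invalid parameters" error rather than an NVMe-specific status.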
00:14:20.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.573 --rc genhtml_branch_coverage=1 00:14:20.573 --rc genhtml_function_coverage=1 00:14:20.573 --rc genhtml_legend=1 00:14:20.573 --rc geninfo_all_blocks=1 00:14:20.573 --rc geninfo_unexecuted_blocks=1 00:14:20.573 00:14:20.573 ' 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.573 --rc genhtml_branch_coverage=1 00:14:20.573 --rc genhtml_function_coverage=1 00:14:20.573 --rc genhtml_legend=1 00:14:20.573 --rc geninfo_all_blocks=1 00:14:20.573 --rc geninfo_unexecuted_blocks=1 00:14:20.573 00:14:20.573 ' 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.573 --rc genhtml_branch_coverage=1 00:14:20.573 --rc genhtml_function_coverage=1 00:14:20.573 --rc genhtml_legend=1 00:14:20.573 --rc geninfo_all_blocks=1 00:14:20.573 --rc geninfo_unexecuted_blocks=1 00:14:20.573 00:14:20.573 ' 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:20.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.573 --rc genhtml_branch_coverage=1 00:14:20.573 --rc genhtml_function_coverage=1 00:14:20.573 --rc genhtml_legend=1 00:14:20.573 --rc geninfo_all_blocks=1 00:14:20.573 --rc geninfo_unexecuted_blocks=1 00:14:20.573 00:14:20.573 ' 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.573 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:20.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:20.574 14:08:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:20.574 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.710 14:08:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:28.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:28.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.710 14:08:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:28.710 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:28.710 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:28.710 14:08:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:28.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:14:28.710 00:14:28.710 --- 10.0.0.2 ping statistics --- 00:14:28.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.710 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:28.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:14:28.710 00:14:28.710 --- 10.0.0.1 ping statistics --- 00:14:28.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.710 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2687452 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2687452 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2687452 ']' 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.710 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.711 [2024-12-06 14:08:16.717103] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
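The nvmf_tgt process being launched here runs inside the cvl_0_0_ns_spdk network namespace that the trace prepared a few lines earlier. Condensed into plain commands, keeping the interface names and addresses exactly as captured on this node, that preparation amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # target reachable from the host
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # initiator reachable from the namespace
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The iptables rule in the trace additionally carries an '-m comment SPDK_NVMF' marker, which the earlier iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup relies on; it is omitted here only for brevity.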
00:14:28.711 [2024-12-06 14:08:16.717167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.711 [2024-12-06 14:08:16.817292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.711 [2024-12-06 14:08:16.870240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.711 [2024-12-06 14:08:16.870293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.711 [2024-12-06 14:08:16.870302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.711 [2024-12-06 14:08:16.870309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.711 [2024-12-06 14:08:16.870316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.711 [2024-12-06 14:08:16.872414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.711 [2024-12-06 14:08:16.872572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.711 [2024-12-06 14:08:16.872859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.711 [2024-12-06 14:08:16.872863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.969 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.969 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:14:28.969 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:28.969 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:28.969 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.969 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.969 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:29.227 [2024-12-06 14:08:17.751196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.227 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:29.512 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:29.512 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:29.772 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:29.772 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.033 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:30.033 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.033 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:30.033 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:30.293 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.553 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:30.553 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.814 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:30.814 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.814 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:30.814 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:31.074 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:31.335 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:31.335 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.335 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:31.335 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.596 14:08:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.856 [2024-12-06 14:08:20.305866] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.856 14:08:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:32.116 14:08:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:32.116 14:08:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.049 14:08:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:34.049 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:14:34.049 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.049 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:14:34.049 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:14:34.049 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:14:35.956 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:35.956 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:35.956 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:35.956 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:14:35.956 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:35.956 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:14:35.956 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:35.956 [global] 00:14:35.956 thread=1 00:14:35.956 invalidate=1 00:14:35.956 rw=write 00:14:35.956 time_based=1 00:14:35.956 runtime=1 00:14:35.956 ioengine=libaio 00:14:35.956 direct=1 00:14:35.956 bs=4096 00:14:35.956 iodepth=1 00:14:35.956 norandommap=0 00:14:35.956 numjobs=1 00:14:35.956 00:14:35.956 verify_dump=1 00:14:35.956 verify_backlog=512 00:14:35.956 verify_state_save=0 00:14:35.956 do_verify=1 00:14:35.956 verify=crc32c-intel 00:14:35.956 [job0] 00:14:35.956 filename=/dev/nvme0n1 00:14:35.956 [job1] 00:14:35.956 filename=/dev/nvme0n2 00:14:35.956 [job2] 00:14:35.956 filename=/dev/nvme0n3 00:14:35.956 [job3] 00:14:35.956 filename=/dev/nvme0n4 00:14:35.956 Could not set queue depth (nvme0n1) 00:14:35.956 Could not set queue depth (nvme0n2) 00:14:35.956 Could not set queue depth (nvme0n3) 00:14:35.956 Could not set queue depth (nvme0n4) 00:14:36.216 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.216 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.216 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.216 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.216 fio-3.35 00:14:36.216 Starting 4 threads 00:14:37.630 00:14:37.630 job0: (groupid=0, jobs=1): err= 0: pid=2689080: Fri Dec 6 14:08:25 2024 00:14:37.630 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:37.630 slat (nsec): min=4486, max=61903, avg=24687.53, stdev=8327.37 00:14:37.630 clat (usec): min=454, max=995, avg=751.76, stdev=99.06 00:14:37.630 lat (usec): min=462, max=1022, avg=776.44, stdev=100.83 00:14:37.630 clat percentiles (usec): 00:14:37.630 | 1.00th=[ 515], 5.00th=[ 586], 10.00th=[ 619], 20.00th=[ 652], 
00:14:37.630 | 30.00th=[ 709], 40.00th=[ 734], 50.00th=[ 758], 60.00th=[ 783], 00:14:37.630 | 70.00th=[ 816], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 898], 00:14:37.630 | 99.00th=[ 938], 99.50th=[ 988], 99.90th=[ 996], 99.95th=[ 996], 00:14:37.630 | 99.99th=[ 996] 00:14:37.630 write: IOPS=997, BW=3988KiB/s (4084kB/s)(3992KiB/1001msec); 0 zone resets 00:14:37.630 slat (usec): min=5, max=28117, avg=54.01, stdev=889.37 00:14:37.630 clat (usec): min=158, max=802, avg=540.04, stdev=110.02 00:14:37.630 lat (usec): min=164, max=28634, avg=594.05, stdev=895.90 00:14:37.630 clat percentiles (usec): 00:14:37.630 | 1.00th=[ 277], 5.00th=[ 355], 10.00th=[ 388], 20.00th=[ 445], 00:14:37.630 | 30.00th=[ 486], 40.00th=[ 519], 50.00th=[ 545], 60.00th=[ 578], 00:14:37.630 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 709], 00:14:37.630 | 99.00th=[ 758], 99.50th=[ 775], 99.90th=[ 799], 99.95th=[ 799], 00:14:37.630 | 99.99th=[ 799] 00:14:37.630 bw ( KiB/s): min= 4096, max= 4096, per=37.77%, avg=4096.00, stdev= 0.00, samples=1 00:14:37.630 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:37.630 lat (usec) : 250=0.26%, 500=23.38%, 750=57.95%, 1000=18.41% 00:14:37.630 cpu : usr=2.00%, sys=3.60%, ctx=1513, majf=0, minf=1 00:14:37.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.630 issued rwts: total=512,998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.630 job1: (groupid=0, jobs=1): err= 0: pid=2689082: Fri Dec 6 14:08:25 2024 00:14:37.630 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:37.630 slat (nsec): min=26623, max=42084, avg=27227.45, stdev=906.59 00:14:37.630 clat (usec): min=728, max=1142, avg=966.31, stdev=53.41 00:14:37.630 lat (usec): min=756, max=1169, avg=993.54, stdev=53.30 00:14:37.630 clat percentiles (usec): 00:14:37.630 | 1.00th=[ 807], 5.00th=[ 865], 10.00th=[ 906], 20.00th=[ 930], 00:14:37.630 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:14:37.630 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1045], 00:14:37.630 | 99.00th=[ 1090], 99.50th=[ 1090], 99.90th=[ 1139], 99.95th=[ 1139], 00:14:37.630 | 99.99th=[ 1139] 00:14:37.630 write: IOPS=753, BW=3013KiB/s (3085kB/s)(3016KiB/1001msec); 0 zone resets 00:14:37.630 slat (nsec): min=9562, max=56519, avg=31608.09, stdev=9981.16 00:14:37.630 clat (usec): min=264, max=892, avg=606.90, stdev=105.18 00:14:37.630 lat (usec): min=274, max=928, avg=638.51, stdev=109.92 00:14:37.630 clat percentiles (usec): 00:14:37.630 | 1.00th=[ 355], 5.00th=[ 416], 10.00th=[ 465], 20.00th=[ 529], 00:14:37.630 | 30.00th=[ 562], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:14:37.630 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 775], 00:14:37.630 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 898], 99.95th=[ 898], 00:14:37.630 | 99.99th=[ 898] 00:14:37.630 bw ( KiB/s): min= 4096, max= 4096, per=37.77%, avg=4096.00, stdev= 0.00, samples=1 00:14:37.630 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:37.630 lat (usec) : 500=9.16%, 750=45.73%, 1000=35.47% 00:14:37.630 lat (msec) : 2=9.64% 00:14:37.630 cpu : usr=2.00%, sys=5.70%, ctx=1267, majf=0, minf=1 00:14:37.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.630 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.630 issued rwts: total=512,754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.630 job2: (groupid=0, jobs=1): err= 0: pid=2689101: Fri Dec 6 14:08:25 2024 00:14:37.630 read: IOPS=145, BW=582KiB/s (596kB/s)(596KiB/1024msec) 00:14:37.630 slat (nsec): min=7408, max=43460, avg=25712.34, stdev=2617.55 00:14:37.630 clat (usec): min=671, max=42046, avg=4537.62, stdev=11570.15 00:14:37.630 lat (usec): min=697, max=42073, avg=4563.33, stdev=11570.58 00:14:37.630 clat percentiles (usec): 00:14:37.630 | 1.00th=[ 750], 5.00th=[ 807], 10.00th=[ 840], 20.00th=[ 906], 00:14:37.630 | 30.00th=[ 938], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1012], 00:14:37.630 | 70.00th=[ 1037], 80.00th=[ 1074], 90.00th=[ 1205], 95.00th=[41681], 00:14:37.630 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:37.630 | 99.99th=[42206] 00:14:37.630 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:14:37.630 slat (nsec): min=9773, max=54061, avg=29705.99, stdev=8954.25 00:14:37.630 clat (usec): min=304, max=903, avg=632.98, stdev=108.93 00:14:37.630 lat (usec): min=315, max=935, avg=662.68, stdev=112.42 00:14:37.630 clat percentiles (usec): 00:14:37.630 | 1.00th=[ 363], 5.00th=[ 408], 10.00th=[ 469], 20.00th=[ 562], 00:14:37.630 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 676], 00:14:37.630 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 775], 00:14:37.631 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 906], 99.95th=[ 906], 00:14:37.631 | 99.99th=[ 906] 00:14:37.631 bw ( KiB/s): min= 4096, max= 4096, per=37.77%, avg=4096.00, stdev= 0.00, samples=1 00:14:37.631 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:37.631 lat (usec) : 500=11.50%, 750=56.58%, 1000=21.79% 00:14:37.631 lat (msec) : 2=8.17%, 50=1.97% 00:14:37.631 cpu : usr=0.68%, sys=2.05%, ctx=661, majf=0, minf=2 00:14:37.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.631 issued rwts: total=149,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.631 job3: (groupid=0, jobs=1): err= 0: pid=2689108: Fri Dec 6 14:08:25 2024 00:14:37.631 read: IOPS=15, BW=63.9KiB/s (65.4kB/s)(64.0KiB/1002msec) 00:14:37.631 slat (nsec): min=27047, max=27877, avg=27311.37, stdev=200.98 00:14:37.631 clat (usec): min=41004, max=42021, avg=41901.52, stdev=242.32 00:14:37.631 lat (usec): min=41032, max=42048, avg=41928.83, stdev=242.18 00:14:37.631 clat percentiles (usec): 00:14:37.631 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:14:37.631 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:14:37.631 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:37.631 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:37.631 | 99.99th=[42206] 00:14:37.631 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:14:37.631 slat (nsec): min=9793, max=64511, avg=30514.22, stdev=11039.42 00:14:37.631 clat (usec): min=181, max=939, avg=609.46, stdev=115.97 00:14:37.631 lat (usec): min=192, max=974, avg=639.98, 
stdev=121.86 00:14:37.631 clat percentiles (usec): 00:14:37.631 | 1.00th=[ 351], 5.00th=[ 408], 10.00th=[ 457], 20.00th=[ 498], 00:14:37.631 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:14:37.631 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 775], 00:14:37.631 | 99.00th=[ 840], 99.50th=[ 881], 99.90th=[ 938], 99.95th=[ 938], 00:14:37.631 | 99.99th=[ 938] 00:14:37.631 bw ( KiB/s): min= 4096, max= 4096, per=37.77%, avg=4096.00, stdev= 0.00, samples=1 00:14:37.631 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:37.631 lat (usec) : 250=0.19%, 500=19.89%, 750=68.37%, 1000=8.52% 00:14:37.631 lat (msec) : 50=3.03% 00:14:37.631 cpu : usr=0.80%, sys=2.20%, ctx=530, majf=0, minf=1 00:14:37.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.631 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.631 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.631 00:14:37.631 Run status group 0 (all jobs): 00:14:37.631 READ: bw=4645KiB/s (4756kB/s), 63.9KiB/s-2046KiB/s (65.4kB/s-2095kB/s), io=4756KiB (4870kB), run=1001-1024msec 00:14:37.631 WRITE: bw=10.6MiB/s (11.1MB/s), 2000KiB/s-3988KiB/s (2048kB/s-4084kB/s), io=10.8MiB (11.4MB), run=1001-1024msec 00:14:37.631 00:14:37.631 Disk stats (read/write): 00:14:37.631 nvme0n1: ios=558/719, merge=0/0, ticks=613/356, in_queue=969, util=85.87% 00:14:37.631 nvme0n2: ios=515/512, merge=0/0, ticks=1361/250, in_queue=1611, util=87.84% 00:14:37.631 nvme0n3: ios=201/512, merge=0/0, ticks=574/318, in_queue=892, util=94.92% 00:14:37.631 nvme0n4: ios=36/512, merge=0/0, ticks=1407/250, in_queue=1657, util=94.00% 00:14:37.631 14:08:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:37.631 [global] 00:14:37.631 thread=1 00:14:37.631 invalidate=1 00:14:37.631 rw=randwrite 00:14:37.631 time_based=1 00:14:37.631 runtime=1 00:14:37.631 ioengine=libaio 00:14:37.631 direct=1 00:14:37.631 bs=4096 00:14:37.631 iodepth=1 00:14:37.631 norandommap=0 00:14:37.631 numjobs=1 00:14:37.631 00:14:37.631 verify_dump=1 00:14:37.631 verify_backlog=512 00:14:37.631 verify_state_save=0 00:14:37.631 do_verify=1 00:14:37.631 verify=crc32c-intel 00:14:37.631 [job0] 00:14:37.631 filename=/dev/nvme0n1 00:14:37.631 [job1] 00:14:37.631 filename=/dev/nvme0n2 00:14:37.631 [job2] 00:14:37.631 filename=/dev/nvme0n3 00:14:37.631 [job3] 00:14:37.631 filename=/dev/nvme0n4 00:14:37.631 Could not set queue depth (nvme0n1) 00:14:37.631 Could not set queue depth (nvme0n2) 00:14:37.631 Could not set queue depth (nvme0n3) 00:14:37.631 Could not set queue depth (nvme0n4) 00:14:37.894 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:37.894 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:37.894 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:37.894 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:37.894 fio-3.35 00:14:37.894 Starting 4 threads 00:14:39.328 00:14:39.328 job0: (groupid=0, jobs=1): err= 0: pid=2689602: Fri Dec 6 14:08:27 
2024 00:14:39.328 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:39.328 slat (nsec): min=6926, max=45012, avg=27117.73, stdev=2101.54 00:14:39.328 clat (usec): min=405, max=1188, avg=958.89, stdev=139.09 00:14:39.328 lat (usec): min=433, max=1214, avg=986.01, stdev=138.93 00:14:39.328 clat percentiles (usec): 00:14:39.328 | 1.00th=[ 570], 5.00th=[ 693], 10.00th=[ 734], 20.00th=[ 832], 00:14:39.328 | 30.00th=[ 930], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1029], 00:14:39.328 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:14:39.328 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1188], 99.95th=[ 1188], 00:14:39.328 | 99.99th=[ 1188] 00:14:39.328 write: IOPS=632, BW=2529KiB/s (2590kB/s)(2532KiB/1001msec); 0 zone resets 00:14:39.328 slat (nsec): min=9208, max=54097, avg=31549.64, stdev=9364.87 00:14:39.328 clat (usec): min=234, max=1014, avg=735.87, stdev=118.93 00:14:39.328 lat (usec): min=244, max=1048, avg=767.42, stdev=122.56 00:14:39.328 clat percentiles (usec): 00:14:39.328 | 1.00th=[ 388], 5.00th=[ 506], 10.00th=[ 586], 20.00th=[ 652], 00:14:39.328 | 30.00th=[ 693], 40.00th=[ 717], 50.00th=[ 750], 60.00th=[ 775], 00:14:39.328 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 873], 95.00th=[ 906], 00:14:39.328 | 99.00th=[ 963], 99.50th=[ 988], 99.90th=[ 1012], 99.95th=[ 1012], 00:14:39.328 | 99.99th=[ 1012] 00:14:39.328 bw ( KiB/s): min= 4096, max= 4096, per=36.68%, avg=4096.00, stdev= 0.00, samples=1 00:14:39.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:39.328 lat (usec) : 250=0.09%, 500=2.79%, 750=31.00%, 1000=41.31% 00:14:39.328 lat (msec) : 2=24.80% 00:14:39.328 cpu : usr=3.10%, sys=3.90%, ctx=1147, majf=0, minf=1 00:14:39.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.328 issued rwts: total=512,633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.328 job1: (groupid=0, jobs=1): err= 0: pid=2689604: Fri Dec 6 14:08:27 2024 00:14:39.328 read: IOPS=586, BW=2346KiB/s (2402kB/s)(2348KiB/1001msec) 00:14:39.328 slat (nsec): min=6756, max=60882, avg=24316.73, stdev=6920.60 00:14:39.328 clat (usec): min=371, max=1080, avg=733.84, stdev=114.49 00:14:39.328 lat (usec): min=397, max=1105, avg=758.16, stdev=115.92 00:14:39.328 clat percentiles (usec): 00:14:39.328 | 1.00th=[ 461], 5.00th=[ 545], 10.00th=[ 578], 20.00th=[ 627], 00:14:39.328 | 30.00th=[ 685], 40.00th=[ 709], 50.00th=[ 742], 60.00th=[ 775], 00:14:39.328 | 70.00th=[ 807], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 906], 00:14:39.328 | 99.00th=[ 971], 99.50th=[ 996], 99.90th=[ 1074], 99.95th=[ 1074], 00:14:39.328 | 99.99th=[ 1074] 00:14:39.328 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:39.328 slat (nsec): min=9270, max=90698, avg=29366.72, stdev=9228.78 00:14:39.328 clat (usec): min=113, max=841, avg=500.73, stdev=124.36 00:14:39.328 lat (usec): min=145, max=873, avg=530.10, stdev=127.31 00:14:39.328 clat percentiles (usec): 00:14:39.328 | 1.00th=[ 247], 5.00th=[ 277], 10.00th=[ 338], 20.00th=[ 396], 00:14:39.328 | 30.00th=[ 433], 40.00th=[ 478], 50.00th=[ 502], 60.00th=[ 529], 00:14:39.328 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 660], 95.00th=[ 701], 00:14:39.328 | 99.00th=[ 766], 99.50th=[ 799], 99.90th=[ 824], 99.95th=[ 840], 00:14:39.328 | 99.99th=[ 
840] 00:14:39.328 bw ( KiB/s): min= 4096, max= 4096, per=36.68%, avg=4096.00, stdev= 0.00, samples=1 00:14:39.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:39.328 lat (usec) : 250=0.93%, 500=30.73%, 750=50.53%, 1000=17.69% 00:14:39.328 lat (msec) : 2=0.12% 00:14:39.328 cpu : usr=2.10%, sys=4.80%, ctx=1612, majf=0, minf=2 00:14:39.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.328 issued rwts: total=587,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.328 job2: (groupid=0, jobs=1): err= 0: pid=2689610: Fri Dec 6 14:08:27 2024 00:14:39.328 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1022msec) 00:14:39.328 slat (nsec): min=26516, max=27198, avg=26774.88, stdev=180.15 00:14:39.328 clat (usec): min=40895, max=42018, avg=41428.94, stdev=498.02 00:14:39.328 lat (usec): min=40922, max=42044, avg=41455.71, stdev=498.00 00:14:39.328 clat percentiles (usec): 00:14:39.328 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:39.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:14:39.328 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:39.328 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:39.328 | 99.99th=[42206] 00:14:39.328 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:14:39.328 slat (nsec): min=9615, max=60241, avg=31460.09, stdev=7946.18 00:14:39.328 clat (usec): min=163, max=961, avg=579.92, stdev=149.73 00:14:39.328 lat (usec): min=197, max=998, avg=611.38, stdev=152.05 00:14:39.328 clat percentiles (usec): 00:14:39.328 | 1.00th=[ 265], 5.00th=[ 302], 10.00th=[ 355], 20.00th=[ 449], 00:14:39.328 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 603], 60.00th=[ 635], 00:14:39.328 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 799], 00:14:39.328 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 963], 99.95th=[ 963], 00:14:39.328 | 99.99th=[ 963] 00:14:39.328 bw ( KiB/s): min= 4096, max= 4096, per=36.68%, avg=4096.00, stdev= 0.00, samples=1 00:14:39.328 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:39.328 lat (usec) : 250=0.76%, 500=26.09%, 750=59.17%, 1000=10.78% 00:14:39.328 lat (msec) : 50=3.21% 00:14:39.328 cpu : usr=0.69%, sys=1.67%, ctx=531, majf=0, minf=1 00:14:39.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.328 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.328 job3: (groupid=0, jobs=1): err= 0: pid=2689613: Fri Dec 6 14:08:27 2024 00:14:39.328 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:39.328 slat (nsec): min=10013, max=59053, avg=25738.04, stdev=3457.96 00:14:39.329 clat (usec): min=741, max=1404, avg=1132.24, stdev=74.24 00:14:39.329 lat (usec): min=766, max=1429, avg=1157.98, stdev=74.26 00:14:39.329 clat percentiles (usec): 00:14:39.329 | 1.00th=[ 906], 5.00th=[ 1004], 10.00th=[ 1029], 20.00th=[ 1090], 00:14:39.329 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:14:39.329 | 
70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:14:39.329 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1401], 99.95th=[ 1401], 00:14:39.329 | 99.99th=[ 1401] 00:14:39.329 write: IOPS=683, BW=2733KiB/s (2799kB/s)(2736KiB/1001msec); 0 zone resets 00:14:39.329 slat (nsec): min=9355, max=49460, avg=22515.33, stdev=10466.61 00:14:39.329 clat (usec): min=186, max=965, avg=560.63, stdev=130.01 00:14:39.329 lat (usec): min=197, max=997, avg=583.15, stdev=133.51 00:14:39.329 clat percentiles (usec): 00:14:39.329 | 1.00th=[ 269], 5.00th=[ 355], 10.00th=[ 396], 20.00th=[ 449], 00:14:39.329 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 594], 00:14:39.329 | 70.00th=[ 627], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 766], 00:14:39.329 | 99.00th=[ 898], 99.50th=[ 922], 99.90th=[ 963], 99.95th=[ 963], 00:14:39.329 | 99.99th=[ 963] 00:14:39.329 bw ( KiB/s): min= 4096, max= 4096, per=36.68%, avg=4096.00, stdev= 0.00, samples=1 00:14:39.329 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:39.329 lat (usec) : 250=0.25%, 500=19.31%, 750=34.11%, 1000=5.60% 00:14:39.329 lat (msec) : 2=40.72% 00:14:39.329 cpu : usr=1.90%, sys=2.60%, ctx=1196, majf=0, minf=1 00:14:39.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.329 issued rwts: total=512,684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.329 00:14:39.329 Run status group 0 (all jobs): 00:14:39.329 READ: bw=6372KiB/s (6525kB/s), 66.5KiB/s-2346KiB/s (68.1kB/s-2402kB/s), io=6512KiB (6668kB), run=1001-1022msec 00:14:39.329 WRITE: bw=10.9MiB/s (11.4MB/s), 2004KiB/s-4092KiB/s (2052kB/s-4190kB/s), io=11.1MiB (11.7MB), run=1001-1022msec 00:14:39.329 00:14:39.329 Disk stats (read/write): 00:14:39.329 nvme0n1: ios=472/512, merge=0/0, ticks=1381/307, in_queue=1688, util=96.69% 00:14:39.329 nvme0n2: ios=547/801, merge=0/0, ticks=404/368, in_queue=772, util=87.16% 00:14:39.329 nvme0n3: ios=46/512, merge=0/0, ticks=884/289, in_queue=1173, util=96.73% 00:14:39.329 nvme0n4: ios=485/512, merge=0/0, ticks=1132/280, in_queue=1412, util=94.76% 00:14:39.329 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:39.329 [global] 00:14:39.329 thread=1 00:14:39.329 invalidate=1 00:14:39.329 rw=write 00:14:39.329 time_based=1 00:14:39.329 runtime=1 00:14:39.329 ioengine=libaio 00:14:39.329 direct=1 00:14:39.329 bs=4096 00:14:39.329 iodepth=128 00:14:39.329 norandommap=0 00:14:39.329 numjobs=1 00:14:39.329 00:14:39.329 verify_dump=1 00:14:39.329 verify_backlog=512 00:14:39.329 verify_state_save=0 00:14:39.329 do_verify=1 00:14:39.329 verify=crc32c-intel 00:14:39.329 [job0] 00:14:39.329 filename=/dev/nvme0n1 00:14:39.329 [job1] 00:14:39.329 filename=/dev/nvme0n2 00:14:39.329 [job2] 00:14:39.329 filename=/dev/nvme0n3 00:14:39.329 [job3] 00:14:39.329 filename=/dev/nvme0n4 00:14:39.329 Could not set queue depth (nvme0n1) 00:14:39.329 Could not set queue depth (nvme0n2) 00:14:39.329 Could not set queue depth (nvme0n3) 00:14:39.329 Could not set queue depth (nvme0n4) 00:14:39.595 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:39.595 job1: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:39.595 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:39.595 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:39.595 fio-3.35 00:14:39.595 Starting 4 threads 00:14:40.655 00:14:40.655 job0: (groupid=0, jobs=1): err= 0: pid=2690125: Fri Dec 6 14:08:29 2024 00:14:40.655 read: IOPS=8166, BW=31.9MiB/s (33.5MB/s)(32.1MiB/1007msec) 00:14:40.655 slat (nsec): min=943, max=7084.9k, avg=57050.94, stdev=426640.52 00:14:40.655 clat (usec): min=3315, max=19126, avg=7899.11, stdev=2100.92 00:14:40.655 lat (usec): min=3360, max=23509, avg=7956.16, stdev=2124.66 00:14:40.655 clat percentiles (usec): 00:14:40.655 | 1.00th=[ 4228], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6390], 00:14:40.655 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7439], 60.00th=[ 7898], 00:14:40.655 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[10814], 95.00th=[11994], 00:14:40.655 | 99.00th=[15008], 99.50th=[16712], 99.90th=[19006], 99.95th=[19006], 00:14:40.655 | 99.99th=[19006] 00:14:40.655 write: IOPS=8643, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1007msec); 0 zone resets 00:14:40.655 slat (nsec): min=1668, max=16602k, avg=53300.84, stdev=396607.05 00:14:40.655 clat (usec): min=786, max=36987, avg=7187.56, stdev=3736.98 00:14:40.655 lat (usec): min=793, max=36998, avg=7240.87, stdev=3765.31 00:14:40.655 clat percentiles (usec): 00:14:40.655 | 1.00th=[ 3195], 5.00th=[ 4080], 10.00th=[ 4359], 20.00th=[ 5407], 00:14:40.655 | 30.00th=[ 5932], 40.00th=[ 6521], 50.00th=[ 6849], 60.00th=[ 7111], 00:14:40.655 | 70.00th=[ 7439], 80.00th=[ 8160], 90.00th=[ 9241], 95.00th=[ 9765], 00:14:40.655 | 99.00th=[32375], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:14:40.655 | 99.99th=[36963] 00:14:40.655 bw ( KiB/s): min=32760, max=36104, per=34.63%, avg=34432.00, stdev=2364.57, samples=2 00:14:40.655 iops : min= 8190, max= 9026, avg=8608.00, stdev=591.14, samples=2 00:14:40.655 lat (usec) : 1000=0.04% 00:14:40.655 lat (msec) : 2=0.06%, 4=2.55%, 10=88.84%, 20=7.56%, 50=0.96% 00:14:40.655 cpu : usr=6.36%, sys=9.05%, ctx=626, majf=0, minf=1 00:14:40.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:40.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.655 issued rwts: total=8224,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.655 job1: (groupid=0, jobs=1): err= 0: pid=2690129: Fri Dec 6 14:08:29 2024 00:14:40.655 read: IOPS=5660, BW=22.1MiB/s (23.2MB/s)(22.2MiB/1005msec) 00:14:40.655 slat (nsec): min=949, max=9441.4k, avg=79192.71, stdev=519279.54 00:14:40.655 clat (usec): min=2541, max=20821, avg=10372.75, stdev=2778.51 00:14:40.655 lat (usec): min=3669, max=20846, avg=10451.94, stdev=2823.77 00:14:40.655 clat percentiles (usec): 00:14:40.655 | 1.00th=[ 5538], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 8094], 00:14:40.655 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10945], 00:14:40.655 | 70.00th=[11338], 80.00th=[12911], 90.00th=[14877], 95.00th=[15401], 00:14:40.656 | 99.00th=[16712], 99.50th=[17171], 99.90th=[19530], 99.95th=[19792], 00:14:40.656 | 99.99th=[20841] 00:14:40.656 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:14:40.656 slat (nsec): min=1658, max=13858k, 
avg=78536.60, stdev=481776.84 00:14:40.656 clat (usec): min=1673, max=25767, avg=11125.43, stdev=3806.59 00:14:40.656 lat (usec): min=1681, max=26055, avg=11203.97, stdev=3836.61 00:14:40.656 clat percentiles (usec): 00:14:40.656 | 1.00th=[ 2057], 5.00th=[ 5866], 10.00th=[ 7177], 20.00th=[ 8094], 00:14:40.656 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[10552], 60.00th=[11731], 00:14:40.656 | 70.00th=[13173], 80.00th=[14615], 90.00th=[16581], 95.00th=[17695], 00:14:40.656 | 99.00th=[20841], 99.50th=[21627], 99.90th=[22676], 99.95th=[22676], 00:14:40.656 | 99.99th=[25822] 00:14:40.656 bw ( KiB/s): min=22800, max=25784, per=24.43%, avg=24292.00, stdev=2110.01, samples=2 00:14:40.656 iops : min= 5700, max= 6446, avg=6073.00, stdev=527.50, samples=2 00:14:40.656 lat (msec) : 2=0.44%, 4=0.70%, 10=46.67%, 20=51.22%, 50=0.97% 00:14:40.656 cpu : usr=4.28%, sys=5.88%, ctx=567, majf=0, minf=1 00:14:40.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:40.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.656 issued rwts: total=5689,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.656 job2: (groupid=0, jobs=1): err= 0: pid=2690135: Fri Dec 6 14:08:29 2024 00:14:40.656 read: IOPS=6169, BW=24.1MiB/s (25.3MB/s)(25.2MiB/1047msec) 00:14:40.656 slat (nsec): min=938, max=17596k, avg=80822.24, stdev=624287.21 00:14:40.656 clat (usec): min=4889, max=82935, avg=11044.01, stdev=9354.47 00:14:40.656 lat (usec): min=4894, max=86415, avg=11124.83, stdev=9393.61 00:14:40.656 clat percentiles (usec): 00:14:40.656 | 1.00th=[ 6128], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[ 8455], 00:14:40.656 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:14:40.656 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[12387], 95.00th=[18220], 00:14:40.656 | 99.00th=[66323], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:14:40.656 | 99.99th=[83362] 00:14:40.656 write: IOPS=6357, BW=24.8MiB/s (26.0MB/s)(26.0MiB/1047msec); 0 zone resets 00:14:40.656 slat (nsec): min=1605, max=10448k, avg=65906.57, stdev=372952.93 00:14:40.656 clat (usec): min=1219, max=28492, avg=9233.49, stdev=2493.52 00:14:40.656 lat (usec): min=1229, max=28500, avg=9299.40, stdev=2523.37 00:14:40.656 clat percentiles (usec): 00:14:40.656 | 1.00th=[ 5145], 5.00th=[ 6521], 10.00th=[ 7701], 20.00th=[ 8029], 00:14:40.656 | 30.00th=[ 8160], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8979], 00:14:40.656 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[11863], 95.00th=[12649], 00:14:40.656 | 99.00th=[19792], 99.50th=[22414], 99.90th=[23987], 99.95th=[23987], 00:14:40.656 | 99.99th=[28443] 00:14:40.656 bw ( KiB/s): min=24576, max=28672, per=26.78%, avg=26624.00, stdev=2896.31, samples=2 00:14:40.656 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:14:40.656 lat (msec) : 2=0.06%, 4=0.11%, 10=77.15%, 20=20.21%, 50=1.51% 00:14:40.656 lat (msec) : 100=0.96% 00:14:40.656 cpu : usr=4.02%, sys=6.21%, ctx=722, majf=0, minf=2 00:14:40.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:40.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.656 issued rwts: total=6459,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.656 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:14:40.656 job3: (groupid=0, jobs=1): err= 0: pid=2690142: Fri Dec 6 14:08:29 2024 00:14:40.656 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:14:40.656 slat (nsec): min=987, max=8895.3k, avg=112381.19, stdev=739140.85 00:14:40.656 clat (usec): min=2938, max=49610, avg=15230.86, stdev=7227.18 00:14:40.656 lat (usec): min=2945, max=49616, avg=15343.25, stdev=7259.67 00:14:40.656 clat percentiles (usec): 00:14:40.656 | 1.00th=[ 5407], 5.00th=[ 6915], 10.00th=[ 7832], 20.00th=[ 9110], 00:14:40.656 | 30.00th=[10421], 40.00th=[11338], 50.00th=[14222], 60.00th=[15795], 00:14:40.656 | 70.00th=[17171], 80.00th=[20841], 90.00th=[26346], 95.00th=[28967], 00:14:40.656 | 99.00th=[35914], 99.50th=[35914], 99.90th=[46924], 99.95th=[46924], 00:14:40.656 | 99.99th=[49546] 00:14:40.656 write: IOPS=4500, BW=17.6MiB/s (18.4MB/s)(17.7MiB/1004msec); 0 zone resets 00:14:40.656 slat (nsec): min=1618, max=8601.5k, avg=102629.48, stdev=575480.94 00:14:40.656 clat (usec): min=599, max=42601, avg=14396.27, stdev=7717.99 00:14:40.656 lat (usec): min=607, max=42604, avg=14498.90, stdev=7753.31 00:14:40.656 clat percentiles (usec): 00:14:40.656 | 1.00th=[ 2089], 5.00th=[ 4621], 10.00th=[ 6194], 20.00th=[ 8225], 00:14:40.656 | 30.00th=[ 9241], 40.00th=[10552], 50.00th=[13698], 60.00th=[15533], 00:14:40.656 | 70.00th=[17171], 80.00th=[19530], 90.00th=[22676], 95.00th=[28967], 00:14:40.656 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:14:40.656 | 99.99th=[42730] 00:14:40.656 bw ( KiB/s): min=14656, max=20480, per=17.67%, avg=17568.00, stdev=4118.19, samples=2 00:14:40.656 iops : min= 3664, max= 5120, avg=4392.00, stdev=1029.55, samples=2 00:14:40.656 lat (usec) : 750=0.03% 00:14:40.656 lat (msec) : 2=0.46%, 4=1.71%, 10=31.31%, 20=45.85%, 50=20.64% 00:14:40.656 cpu : usr=3.39%, sys=4.99%, ctx=378, majf=0, minf=3 00:14:40.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:40.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.656 issued rwts: total=4096,4519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.656 00:14:40.656 Run status group 0 (all jobs): 00:14:40.656 READ: bw=91.3MiB/s (95.7MB/s), 15.9MiB/s-31.9MiB/s (16.7MB/s-33.5MB/s), io=95.6MiB (100MB), run=1004-1047msec 00:14:40.656 WRITE: bw=97.1MiB/s (102MB/s), 17.6MiB/s-33.8MiB/s (18.4MB/s-35.4MB/s), io=102MiB (107MB), run=1004-1047msec 00:14:40.656 00:14:40.656 Disk stats (read/write): 00:14:40.656 nvme0n1: ios=6708/7104, merge=0/0, ticks=49881/46435, in_queue=96316, util=96.29% 00:14:40.656 nvme0n2: ios=4658/5044, merge=0/0, ticks=27530/32256, in_queue=59786, util=96.12% 00:14:40.656 nvme0n3: ios=5371/5632, merge=0/0, ticks=29051/29560, in_queue=58611, util=88.24% 00:14:40.656 nvme0n4: ios=3605/3712, merge=0/0, ticks=27721/33488, in_queue=61209, util=90.45% 00:14:40.916 14:08:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:40.916 [global] 00:14:40.916 thread=1 00:14:40.916 invalidate=1 00:14:40.916 rw=randwrite 00:14:40.916 time_based=1 00:14:40.916 runtime=1 00:14:40.916 ioengine=libaio 00:14:40.916 direct=1 00:14:40.916 bs=4096 00:14:40.916 iodepth=128 00:14:40.916 norandommap=0 00:14:40.916 numjobs=1 00:14:40.916 00:14:40.916 verify_dump=1 00:14:40.916 
verify_backlog=512 00:14:40.916 verify_state_save=0 00:14:40.916 do_verify=1 00:14:40.916 verify=crc32c-intel 00:14:40.916 [job0] 00:14:40.916 filename=/dev/nvme0n1 00:14:40.916 [job1] 00:14:40.916 filename=/dev/nvme0n2 00:14:40.916 [job2] 00:14:40.916 filename=/dev/nvme0n3 00:14:40.916 [job3] 00:14:40.916 filename=/dev/nvme0n4 00:14:40.916 Could not set queue depth (nvme0n1) 00:14:40.916 Could not set queue depth (nvme0n2) 00:14:40.916 Could not set queue depth (nvme0n3) 00:14:40.916 Could not set queue depth (nvme0n4) 00:14:41.175 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.175 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.175 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.175 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:41.175 fio-3.35 00:14:41.175 Starting 4 threads 00:14:42.556 00:14:42.556 job0: (groupid=0, jobs=1): err= 0: pid=2690651: Fri Dec 6 14:08:31 2024 00:14:42.556 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:14:42.556 slat (nsec): min=978, max=13154k, avg=92688.41, stdev=763740.75 00:14:42.556 clat (usec): min=4306, max=44959, avg=12873.72, stdev=6154.81 00:14:42.556 lat (usec): min=4312, max=44968, avg=12966.41, stdev=6218.59 00:14:42.556 clat percentiles (usec): 00:14:42.556 | 1.00th=[ 4490], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7832], 00:14:42.556 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9896], 60.00th=[13304], 00:14:42.556 | 70.00th=[15664], 80.00th=[17171], 90.00th=[21365], 95.00th=[26346], 00:14:42.556 | 99.00th=[31327], 99.50th=[36963], 99.90th=[44827], 99.95th=[44827], 00:14:42.556 | 99.99th=[44827] 00:14:42.556 write: IOPS=4278, BW=16.7MiB/s (17.5MB/s)(16.9MiB/1011msec); 0 zone resets 00:14:42.556 slat (nsec): min=1629, max=18447k, avg=120728.63, stdev=794837.30 00:14:42.556 clat (usec): min=1737, max=77615, avg=17403.74, stdev=16050.41 00:14:42.556 lat (usec): min=1803, max=77623, avg=17524.47, stdev=16160.64 00:14:42.556 clat percentiles (usec): 00:14:42.556 | 1.00th=[ 3261], 5.00th=[ 4424], 10.00th=[ 5211], 20.00th=[ 6783], 00:14:42.556 | 30.00th=[ 7504], 40.00th=[10159], 50.00th=[13566], 60.00th=[14484], 00:14:42.556 | 70.00th=[16909], 80.00th=[20055], 90.00th=[41157], 95.00th=[60031], 00:14:42.556 | 99.00th=[73925], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:14:42.556 | 99.99th=[78119] 00:14:42.556 bw ( KiB/s): min=10864, max=22728, per=18.37%, avg=16796.00, stdev=8389.11, samples=2 00:14:42.556 iops : min= 2716, max= 5682, avg=4199.00, stdev=2097.28, samples=2 00:14:42.556 lat (msec) : 2=0.05%, 4=1.33%, 10=43.46%, 20=36.84%, 50=14.49% 00:14:42.556 lat (msec) : 100=3.84% 00:14:42.556 cpu : usr=3.17%, sys=4.75%, ctx=364, majf=0, minf=1 00:14:42.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:42.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.556 issued rwts: total=4096,4326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.556 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.556 job1: (groupid=0, jobs=1): err= 0: pid=2690663: Fri Dec 6 14:08:31 2024 00:14:42.556 read: IOPS=6077, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1011msec) 00:14:42.556 slat (nsec): min=1000, max=10740k, 
avg=71883.14, stdev=575200.24 00:14:42.556 clat (usec): min=1771, max=26358, avg=9408.79, stdev=3861.42 00:14:42.556 lat (usec): min=1807, max=26386, avg=9480.67, stdev=3911.38 00:14:42.556 clat percentiles (usec): 00:14:42.556 | 1.00th=[ 2704], 5.00th=[ 5407], 10.00th=[ 6128], 20.00th=[ 6652], 00:14:42.556 | 30.00th=[ 6783], 40.00th=[ 7111], 50.00th=[ 7898], 60.00th=[ 9241], 00:14:42.556 | 70.00th=[10683], 80.00th=[12256], 90.00th=[15795], 95.00th=[17433], 00:14:42.556 | 99.00th=[20055], 99.50th=[21103], 99.90th=[23462], 99.95th=[23987], 00:14:42.556 | 99.99th=[26346] 00:14:42.556 write: IOPS=6375, BW=24.9MiB/s (26.1MB/s)(25.2MiB/1011msec); 0 zone resets 00:14:42.556 slat (nsec): min=1597, max=16744k, avg=78654.50, stdev=583039.94 00:14:42.556 clat (usec): min=567, max=68272, avg=10877.94, stdev=11201.95 00:14:42.556 lat (usec): min=577, max=68293, avg=10956.59, stdev=11283.51 00:14:42.556 clat percentiles (usec): 00:14:42.556 | 1.00th=[ 2376], 5.00th=[ 3556], 10.00th=[ 4228], 20.00th=[ 6128], 00:14:42.556 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7111], 00:14:42.556 | 70.00th=[ 8848], 80.00th=[13960], 90.00th=[19268], 95.00th=[27132], 00:14:42.556 | 99.00th=[64750], 99.50th=[65274], 99.90th=[68682], 99.95th=[68682], 00:14:42.556 | 99.99th=[68682] 00:14:42.556 bw ( KiB/s): min=20480, max=30072, per=27.64%, avg=25276.00, stdev=6782.57, samples=2 00:14:42.556 iops : min= 5120, max= 7518, avg=6319.00, stdev=1695.64, samples=2 00:14:42.556 lat (usec) : 750=0.05%, 1000=0.13% 00:14:42.556 lat (msec) : 2=0.15%, 4=4.81%, 10=64.88%, 20=24.82%, 50=3.65% 00:14:42.556 lat (msec) : 100=1.52% 00:14:42.557 cpu : usr=4.55%, sys=6.93%, ctx=533, majf=0, minf=2 00:14:42.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:42.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.557 issued rwts: total=6144,6446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.557 job2: (groupid=0, jobs=1): err= 0: pid=2690699: Fri Dec 6 14:08:31 2024 00:14:42.557 read: IOPS=7470, BW=29.2MiB/s (30.6MB/s)(30.5MiB/1045msec) 00:14:42.557 slat (nsec): min=996, max=8635.9k, avg=62979.68, stdev=491132.74 00:14:42.557 clat (usec): min=1739, max=53982, avg=9237.62, stdev=5832.87 00:14:42.557 lat (usec): min=1744, max=53990, avg=9300.60, stdev=5846.63 00:14:42.557 clat percentiles (usec): 00:14:42.557 | 1.00th=[ 4146], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 6652], 00:14:42.557 | 30.00th=[ 7046], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 8848], 00:14:42.557 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11863], 95.00th=[13829], 00:14:42.557 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:14:42.557 | 99.99th=[53740] 00:14:42.557 write: IOPS=7839, BW=30.6MiB/s (32.1MB/s)(32.0MiB/1045msec); 0 zone resets 00:14:42.557 slat (nsec): min=1606, max=8204.5k, avg=57155.73, stdev=431535.50 00:14:42.557 clat (usec): min=1167, max=17620, avg=7377.89, stdev=2210.05 00:14:42.557 lat (usec): min=1177, max=17623, avg=7435.05, stdev=2231.28 00:14:42.557 clat percentiles (usec): 00:14:42.557 | 1.00th=[ 3556], 5.00th=[ 4080], 10.00th=[ 4359], 20.00th=[ 5538], 00:14:42.557 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6915], 60.00th=[ 8094], 00:14:42.557 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[11338], 00:14:42.557 | 99.00th=[12911], 99.50th=[13173], 99.90th=[14353], 
99.95th=[17171], 00:14:42.557 | 99.99th=[17695] 00:14:42.557 bw ( KiB/s): min=28664, max=36790, per=35.79%, avg=32727.00, stdev=5745.95, samples=2 00:14:42.557 iops : min= 7166, max= 9197, avg=8181.50, stdev=1436.13, samples=2 00:14:42.557 lat (msec) : 2=0.21%, 4=1.83%, 10=82.35%, 20=14.83%, 50=0.29% 00:14:42.557 lat (msec) : 100=0.51% 00:14:42.557 cpu : usr=5.27%, sys=7.95%, ctx=477, majf=0, minf=1 00:14:42.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:42.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.557 issued rwts: total=7807,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.557 job3: (groupid=0, jobs=1): err= 0: pid=2690713: Fri Dec 6 14:08:31 2024 00:14:42.557 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:14:42.557 slat (nsec): min=1014, max=18093k, avg=92253.26, stdev=694583.51 00:14:42.557 clat (usec): min=1985, max=38321, avg=11098.57, stdev=4799.89 00:14:42.557 lat (usec): min=1997, max=38332, avg=11190.82, stdev=4853.76 00:14:42.557 clat percentiles (usec): 00:14:42.557 | 1.00th=[ 4555], 5.00th=[ 6849], 10.00th=[ 7570], 20.00th=[ 7963], 00:14:42.557 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:14:42.557 | 70.00th=[11994], 80.00th=[14484], 90.00th=[17171], 95.00th=[22414], 00:14:42.557 | 99.00th=[29230], 99.50th=[30540], 99.90th=[35914], 99.95th=[35914], 00:14:42.557 | 99.99th=[38536] 00:14:42.557 write: IOPS=4890, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1007msec); 0 zone resets 00:14:42.557 slat (nsec): min=1670, max=11012k, avg=111039.55, stdev=643595.71 00:14:42.557 clat (usec): min=1177, max=68336, avg=15506.50, stdev=12881.28 00:14:42.557 lat (usec): min=1186, max=68348, avg=15617.54, stdev=12954.04 00:14:42.557 clat percentiles (usec): 00:14:42.557 | 1.00th=[ 3130], 5.00th=[ 4883], 10.00th=[ 5932], 20.00th=[ 6980], 00:14:42.557 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[10945], 60.00th=[14484], 00:14:42.557 | 70.00th=[15533], 80.00th=[21890], 90.00th=[32637], 95.00th=[42206], 00:14:42.557 | 99.00th=[66323], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:14:42.557 | 99.99th=[68682] 00:14:42.557 bw ( KiB/s): min=17904, max=20439, per=20.97%, avg=19171.50, stdev=1792.52, samples=2 00:14:42.557 iops : min= 4476, max= 5109, avg=4792.50, stdev=447.60, samples=2 00:14:42.557 lat (msec) : 2=0.13%, 4=1.00%, 10=53.59%, 20=30.60%, 50=12.70% 00:14:42.557 lat (msec) : 100=1.98% 00:14:42.557 cpu : usr=3.08%, sys=5.96%, ctx=434, majf=0, minf=1 00:14:42.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:42.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.557 issued rwts: total=4608,4925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.557 00:14:42.557 Run status group 0 (all jobs): 00:14:42.557 READ: bw=84.7MiB/s (88.8MB/s), 15.8MiB/s-29.2MiB/s (16.6MB/s-30.6MB/s), io=88.5MiB (92.8MB), run=1007-1045msec 00:14:42.557 WRITE: bw=89.3MiB/s (93.6MB/s), 16.7MiB/s-30.6MiB/s (17.5MB/s-32.1MB/s), io=93.3MiB (97.8MB), run=1007-1045msec 00:14:42.557 00:14:42.557 Disk stats (read/write): 00:14:42.557 nvme0n1: ios=2586/2991, merge=0/0, ticks=34245/58747, in_queue=92992, util=89.98% 00:14:42.557 nvme0n2: 
ios=4651/4840, merge=0/0, ticks=41134/52401, in_queue=93535, util=94.91% 00:14:42.557 nvme0n3: ios=7863/8192, merge=0/0, ticks=63313/57601, in_queue=120914, util=93.98% 00:14:42.557 nvme0n4: ios=3474/3584, merge=0/0, ticks=38445/56338, in_queue=94783, util=95.99% 00:14:42.557 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:42.557 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2690975 00:14:42.557 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:42.557 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:42.557 [global] 00:14:42.557 thread=1 00:14:42.557 invalidate=1 00:14:42.557 rw=read 00:14:42.557 time_based=1 00:14:42.557 runtime=10 00:14:42.557 ioengine=libaio 00:14:42.557 direct=1 00:14:42.557 bs=4096 00:14:42.557 iodepth=1 00:14:42.557 norandommap=1 00:14:42.557 numjobs=1 00:14:42.557 00:14:42.557 [job0] 00:14:42.557 filename=/dev/nvme0n1 00:14:42.557 [job1] 00:14:42.557 filename=/dev/nvme0n2 00:14:42.557 [job2] 00:14:42.557 filename=/dev/nvme0n3 00:14:42.557 [job3] 00:14:42.557 filename=/dev/nvme0n4 00:14:42.557 Could not set queue depth (nvme0n1) 00:14:42.557 Could not set queue depth (nvme0n2) 00:14:42.557 Could not set queue depth (nvme0n3) 00:14:42.557 Could not set queue depth (nvme0n4) 00:14:43.125 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:43.125 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:43.125 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:43.125 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:43.125 fio-3.35 00:14:43.125 Starting 4 threads 00:14:45.670 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:45.670 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:14:45.670 fio: pid=2691202, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:45.670 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:45.930 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:14:45.930 fio: pid=2691195, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:45.930 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:45.930 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:46.190 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.190 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:46.191 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6045696, 
buflen=4096 00:14:46.191 fio: pid=2691186, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:46.191 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14704640, buflen=4096 00:14:46.191 fio: pid=2691190, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:46.191 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.191 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:46.191 00:14:46.191 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2691186: Fri Dec 6 14:08:34 2024 00:14:46.191 read: IOPS=494, BW=1975KiB/s (2023kB/s)(5904KiB/2989msec) 00:14:46.191 slat (usec): min=6, max=5606, avg=29.31, stdev=145.41 00:14:46.191 clat (usec): min=605, max=42061, avg=1988.91, stdev=6377.39 00:14:46.191 lat (usec): min=630, max=46925, avg=2018.22, stdev=6403.34 00:14:46.191 clat percentiles (usec): 00:14:46.191 | 1.00th=[ 717], 5.00th=[ 807], 10.00th=[ 840], 20.00th=[ 889], 00:14:46.191 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 996], 00:14:46.191 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1106], 00:14:46.191 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:46.191 | 99.99th=[42206] 00:14:46.191 bw ( KiB/s): min= 96, max= 4048, per=35.33%, avg=2342.40, stdev=1769.81, samples=5 00:14:46.191 iops : min= 24, max= 1012, avg=585.60, stdev=442.45, samples=5 00:14:46.191 lat (usec) : 750=1.69%, 1000=60.19% 00:14:46.191 lat (msec) : 2=35.48%, 20=0.07%, 50=2.51% 00:14:46.191 cpu : usr=0.27%, sys=1.74%, ctx=1480, majf=0, minf=1 00:14:46.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.191 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.191 issued rwts: total=1477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.191 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2691190: Fri Dec 6 14:08:34 2024 00:14:46.191 read: IOPS=1145, BW=4581KiB/s (4690kB/s)(14.0MiB/3135msec) 00:14:46.191 slat (usec): min=6, max=11224, avg=35.35, stdev=342.80 00:14:46.191 clat (usec): min=291, max=41859, avg=831.54, stdev=1516.40 00:14:46.191 lat (usec): min=299, max=41886, avg=866.89, stdev=1553.99 00:14:46.191 clat percentiles (usec): 00:14:46.191 | 1.00th=[ 553], 5.00th=[ 644], 10.00th=[ 668], 20.00th=[ 717], 00:14:46.191 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 783], 00:14:46.191 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 906], 95.00th=[ 947], 00:14:46.191 | 99.00th=[ 1004], 99.50th=[ 1029], 99.90th=[41157], 99.95th=[41681], 00:14:46.191 | 99.99th=[41681] 00:14:46.191 bw ( KiB/s): min= 3016, max= 5144, per=69.35%, avg=4598.00, stdev=817.76, samples=6 00:14:46.191 iops : min= 754, max= 1286, avg=1149.50, stdev=204.44, samples=6 00:14:46.191 lat (usec) : 500=0.33%, 750=33.47%, 1000=65.14% 00:14:46.191 lat (msec) : 2=0.89%, 50=0.14% 00:14:46.191 cpu : usr=1.21%, sys=3.13%, ctx=3598, majf=0, minf=2 00:14:46.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.191 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.191 issued rwts: total=3591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.191 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2691195: Fri Dec 6 14:08:34 2024 00:14:46.191 read: IOPS=24, BW=96.9KiB/s (99.3kB/s)(268KiB/2765msec) 00:14:46.191 slat (usec): min=26, max=220, avg=29.87, stdev=23.49 00:14:46.191 clat (usec): min=987, max=42100, avg=41185.92, stdev=4998.40 00:14:46.191 lat (usec): min=1028, max=42127, avg=41215.83, stdev=4997.07 00:14:46.191 clat percentiles (usec): 00:14:46.191 | 1.00th=[ 988], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:14:46.191 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:14:46.191 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:46.191 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:46.191 | 99.99th=[42206] 00:14:46.191 bw ( KiB/s): min= 96, max= 96, per=1.45%, avg=96.00, stdev= 0.00, samples=5 00:14:46.191 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:14:46.191 lat (usec) : 1000=1.47% 00:14:46.191 lat (msec) : 50=97.06% 00:14:46.191 cpu : usr=0.11%, sys=0.00%, ctx=73, majf=0, minf=2 00:14:46.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.191 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.191 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.191 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2691202: Fri Dec 6 14:08:34 2024 00:14:46.191 read: IOPS=24, BW=96.9KiB/s (99.2kB/s)(252KiB/2601msec) 00:14:46.191 slat (nsec): min=25134, max=38866, avg=25778.16, stdev=1681.86 00:14:46.191 clat (usec): min=1024, max=42066, avg=41237.21, stdev=5155.18 00:14:46.191 lat (usec): min=1063, max=42092, avg=41262.99, stdev=5153.51 00:14:46.191 clat percentiles (usec): 00:14:46.191 | 1.00th=[ 1029], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:14:46.191 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:14:46.191 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:46.191 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:46.191 | 99.99th=[42206] 00:14:46.191 bw ( KiB/s): min= 88, max= 104, per=1.45%, avg=96.00, stdev= 5.66, samples=5 00:14:46.191 iops : min= 22, max= 26, avg=24.00, stdev= 1.41, samples=5 00:14:46.191 lat (msec) : 2=1.56%, 50=96.88% 00:14:46.191 cpu : usr=0.12%, sys=0.00%, ctx=64, majf=0, minf=2 00:14:46.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.191 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.191 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.191 00:14:46.191 Run status group 0 (all jobs): 00:14:46.191 READ: bw=6630KiB/s (6789kB/s), 96.9KiB/s-4581KiB/s (99.2kB/s-4690kB/s), io=20.3MiB (21.3MB), run=2601-3135msec 00:14:46.191 00:14:46.191 Disk stats (read/write): 
00:14:46.191 nvme0n1: ios=1472/0, merge=0/0, ticks=2758/0, in_queue=2758, util=94.56% 00:14:46.191 nvme0n2: ios=3570/0, merge=0/0, ticks=3652/0, in_queue=3652, util=98.95% 00:14:46.191 nvme0n3: ios=92/0, merge=0/0, ticks=3068/0, in_queue=3068, util=99.00% 00:14:46.191 nvme0n4: ios=62/0, merge=0/0, ticks=2558/0, in_queue=2558, util=96.42% 00:14:46.451 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.451 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:46.711 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.711 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:46.971 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.971 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:46.971 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.971 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2690975 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:47.232 nvmf hotplug test: fio failed as expected 00:14:47.232 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.493 rmmod nvme_tcp 00:14:47.493 rmmod nvme_fabrics 00:14:47.493 rmmod nvme_keyring 00:14:47.493 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.494 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:47.494 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:47.494 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2687452 ']' 00:14:47.494 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2687452 00:14:47.494 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2687452 ']' 00:14:47.494 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2687452 00:14:47.494 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:14:47.494 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.494 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2687452 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2687452' 00:14:47.754 killing process with pid 2687452 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2687452 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2687452 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.754 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:50.298 00:14:50.298 real 0m29.391s 00:14:50.298 user 2m44.531s 00:14:50.298 sys 0m9.613s 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.298 ************************************ 00:14:50.298 END TEST nvmf_fio_target 00:14:50.298 ************************************ 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:50.298 ************************************ 00:14:50.298 START TEST nvmf_bdevio 00:14:50.298 ************************************ 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:50.298 * Looking for test storage... 
00:14:50.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:50.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.298 --rc genhtml_branch_coverage=1 00:14:50.298 --rc genhtml_function_coverage=1 00:14:50.298 --rc genhtml_legend=1 00:14:50.298 --rc geninfo_all_blocks=1 00:14:50.298 --rc geninfo_unexecuted_blocks=1 00:14:50.298 00:14:50.298 ' 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:50.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.298 --rc genhtml_branch_coverage=1 00:14:50.298 --rc genhtml_function_coverage=1 00:14:50.298 --rc genhtml_legend=1 00:14:50.298 --rc geninfo_all_blocks=1 00:14:50.298 --rc geninfo_unexecuted_blocks=1 00:14:50.298 00:14:50.298 ' 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:50.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.298 --rc genhtml_branch_coverage=1 00:14:50.298 --rc genhtml_function_coverage=1 00:14:50.298 --rc genhtml_legend=1 00:14:50.298 --rc geninfo_all_blocks=1 00:14:50.298 --rc geninfo_unexecuted_blocks=1 00:14:50.298 00:14:50.298 ' 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:50.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.298 --rc genhtml_branch_coverage=1 00:14:50.298 --rc genhtml_function_coverage=1 00:14:50.298 --rc genhtml_legend=1 00:14:50.298 --rc geninfo_all_blocks=1 00:14:50.298 --rc geninfo_unexecuted_blocks=1 00:14:50.298 00:14:50.298 ' 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:50.298 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:14:50.299 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:58.433 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:58.433 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:58.433 14:08:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:58.433 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:58.433 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.433 
14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:58.433 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.433 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.433 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.433 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:58.433 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:58.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:14:58.433 00:14:58.433 --- 10.0.0.2 ping statistics --- 00:14:58.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.433 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:14:58.433 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:58.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:14:58.433 00:14:58.433 --- 10.0.0.1 ping statistics --- 00:14:58.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.433 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:14:58.433 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.433 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:14:58.433 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2696486 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2696486 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2696486 ']' 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.434 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.434 [2024-12-06 14:08:46.188309] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
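The trace above has just finished laying out the two-port loopback topology this suite runs on: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, the other (cvl_0_1) stays in the default namespace as the initiator, and a single iptables rule opens the NVMe/TCP port. A condensed sketch of the equivalent manual setup, assuming the same interface and namespace names as this run (root required):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open TCP/4420 on the initiator-side interface; the harness tags the rule with an
# SPDK_NVMF comment (shown in full in the trace) so it can be stripped at teardown
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
# connectivity checks in both directions, as in the trace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1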
00:14:58.434 [2024-12-06 14:08:46.188373] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.434 [2024-12-06 14:08:46.288025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.434 [2024-12-06 14:08:46.340306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.434 [2024-12-06 14:08:46.340355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.434 [2024-12-06 14:08:46.340363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.434 [2024-12-06 14:08:46.340370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.434 [2024-12-06 14:08:46.340376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.434 [2024-12-06 14:08:46.342449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:58.434 [2024-12-06 14:08:46.342611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:58.434 [2024-12-06 14:08:46.342871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:58.434 [2024-12-06 14:08:46.342874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.434 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.434 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:14:58.434 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:58.434 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:58.434 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.434 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.434 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.434 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.434 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.434 [2024-12-06 14:08:47.064130] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.695 Malloc0 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.695 14:08:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:58.695 [2024-12-06 14:08:47.139090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:58.695 { 00:14:58.695 "params": { 00:14:58.695 "name": "Nvme$subsystem", 00:14:58.695 "trtype": "$TEST_TRANSPORT", 00:14:58.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:58.695 "adrfam": "ipv4", 00:14:58.695 "trsvcid": "$NVMF_PORT", 00:14:58.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:58.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:58.695 "hdgst": ${hdgst:-false}, 00:14:58.695 "ddgst": ${ddgst:-false} 00:14:58.695 }, 00:14:58.695 "method": "bdev_nvme_attach_controller" 00:14:58.695 } 00:14:58.695 EOF 00:14:58.695 )") 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:58.695 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:58.695 "params": { 00:14:58.695 "name": "Nvme1", 00:14:58.695 "trtype": "tcp", 00:14:58.695 "traddr": "10.0.0.2", 00:14:58.695 "adrfam": "ipv4", 00:14:58.695 "trsvcid": "4420", 00:14:58.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:58.695 "hdgst": false, 00:14:58.695 "ddgst": false 00:14:58.695 }, 00:14:58.695 "method": "bdev_nvme_attach_controller" 00:14:58.695 }' 00:14:58.695 [2024-12-06 14:08:47.206234] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
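Before bdevio is launched above, the target (started inside the namespace as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78) is provisioned through rpc_cmd. A sketch of the same sequence driven directly with scripts/rpc.py, with flags copied from the rpc_cmd calls in the trace; no netns exec is needed for the RPCs because the target's RPC listener is the UNIX socket /var/tmp/spdk.sock:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte IO unit
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420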
00:14:58.696 [2024-12-06 14:08:47.206319] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696558 ] 00:14:58.696 [2024-12-06 14:08:47.305167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:58.956 [2024-12-06 14:08:47.362562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.956 [2024-12-06 14:08:47.362765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.956 [2024-12-06 14:08:47.362767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.956 I/O targets: 00:14:58.956 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:58.956 00:14:58.956 00:14:58.956 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.956 http://cunit.sourceforge.net/ 00:14:58.956 00:14:58.956 00:14:58.956 Suite: bdevio tests on: Nvme1n1 00:14:59.216 Test: blockdev write read block ...passed 00:14:59.216 Test: blockdev write zeroes read block ...passed 00:14:59.216 Test: blockdev write zeroes read no split ...passed 00:14:59.216 Test: blockdev write zeroes read split ...passed 00:14:59.216 Test: blockdev write zeroes read split partial ...passed 00:14:59.216 Test: blockdev reset ...[2024-12-06 14:08:47.695014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:59.216 [2024-12-06 14:08:47.695105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c5580 (9): Bad file descriptor 00:14:59.216 [2024-12-06 14:08:47.789559] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
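bdevio itself is configured entirely through the JSON it is handed on /dev/fd/62; the bdev_nvme_attach_controller fragment it receives is printed verbatim in the trace above. A standalone equivalent would write that fragment into SPDK's usual "subsystems"/"bdev" config wrapper (an assumption about what the gen_nvmf_target_json helper emits; it may add further bdev options) and point bdevio at the file:

cat > /tmp/bdevio_nvme.json <<'EOF'    # /tmp path chosen here only for illustration
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json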
00:14:59.216 passed 00:14:59.216 Test: blockdev write read 8 blocks ...passed 00:14:59.216 Test: blockdev write read size > 128k ...passed 00:14:59.216 Test: blockdev write read invalid size ...passed 00:14:59.476 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:59.476 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:59.476 Test: blockdev write read max offset ...passed 00:14:59.476 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:59.476 Test: blockdev writev readv 8 blocks ...passed 00:14:59.476 Test: blockdev writev readv 30 x 1block ...passed 00:14:59.476 Test: blockdev writev readv block ...passed 00:14:59.476 Test: blockdev writev readv size > 128k ...passed 00:14:59.476 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:59.476 Test: blockdev comparev and writev ...[2024-12-06 14:08:48.014378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:59.477 [2024-12-06 14:08:48.014426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:59.477 [2024-12-06 14:08:48.014443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:59.477 [2024-12-06 14:08:48.014452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:59.477 [2024-12-06 14:08:48.015025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:59.477 [2024-12-06 14:08:48.015039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:59.477 [2024-12-06 14:08:48.015054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:59.477 [2024-12-06 14:08:48.015070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:59.477 [2024-12-06 14:08:48.015597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:59.477 [2024-12-06 14:08:48.015610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:59.477 [2024-12-06 14:08:48.015624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:59.477 [2024-12-06 14:08:48.015635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:59.477 [2024-12-06 14:08:48.016142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:59.477 [2024-12-06 14:08:48.016155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:59.477 [2024-12-06 14:08:48.016171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:59.477 [2024-12-06 14:08:48.016179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:59.477 passed 00:14:59.477 Test: blockdev nvme passthru rw ...passed 00:14:59.477 Test: blockdev nvme passthru vendor specific ...[2024-12-06 14:08:48.101343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:59.477 [2024-12-06 14:08:48.101358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:59.477 [2024-12-06 14:08:48.101748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:59.477 [2024-12-06 14:08:48.101760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:59.477 [2024-12-06 14:08:48.102147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:59.477 [2024-12-06 14:08:48.102158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:59.477 [2024-12-06 14:08:48.102538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:59.477 [2024-12-06 14:08:48.102549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:59.477 passed 00:14:59.737 Test: blockdev nvme admin passthru ...passed 00:14:59.737 Test: blockdev copy ...passed 00:14:59.737 00:14:59.737 Run Summary: Type Total Ran Passed Failed Inactive 00:14:59.737 suites 1 1 n/a 0 0 00:14:59.737 tests 23 23 23 0 0 00:14:59.737 asserts 152 152 152 0 n/a 00:14:59.737 00:14:59.737 Elapsed time = 1.178 seconds 00:14:59.737 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.737 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.737 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:59.737 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.737 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:59.737 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:59.737 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.738 rmmod nvme_tcp 00:14:59.738 rmmod nvme_fabrics 00:14:59.738 rmmod nvme_keyring 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2696486 ']' 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2696486 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2696486 ']' 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2696486 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.738 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2696486 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2696486' 00:14:59.998 killing process with pid 2696486 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2696486 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2696486 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.998 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.540 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:02.540 00:15:02.540 real 0m12.267s 00:15:02.540 user 0m13.453s 00:15:02.540 sys 0m6.213s 00:15:02.540 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.540 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:02.540 ************************************ 00:15:02.540 END TEST nvmf_bdevio 00:15:02.540 ************************************ 00:15:02.540 14:08:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:02.540 00:15:02.540 real 5m5.083s 00:15:02.540 user 12m2.262s 00:15:02.540 sys 1m52.560s 
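Teardown mirrors the setup: the subsystem is removed over RPC, the kernel NVMe modules unloaded, the target process killed, and the firewall rule and namespace plumbing stripped. A rough standalone equivalent of the nvmftestfini sequence traced here (the _remove_spdk_ns helper runs with xtrace suppressed, so the ip netns delete below is an assumed equivalent):

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp              # the trace shows this also drops nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                      # nvmfpid: the nvmf_tgt PID captured at startup (2696486 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore     # strip the SPDK_NVMF-tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of the xtrace-suppressed _remove_spdk_ns
ip -4 addr flush cvl_0_1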
00:15:02.540 14:08:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.540 14:08:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:02.540 ************************************ 00:15:02.540 END TEST nvmf_target_core 00:15:02.540 ************************************ 00:15:02.540 14:08:50 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:02.540 14:08:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:02.540 14:08:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.540 14:08:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:02.540 ************************************ 00:15:02.540 START TEST nvmf_target_extra 00:15:02.540 ************************************ 00:15:02.540 14:08:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:02.540 * Looking for test storage... 00:15:02.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:15:02.540 14:08:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:02.540 14:08:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:15:02.540 14:08:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:02.540 14:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:02.540 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:02.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.541 --rc genhtml_branch_coverage=1 00:15:02.541 --rc genhtml_function_coverage=1 00:15:02.541 --rc genhtml_legend=1 00:15:02.541 --rc geninfo_all_blocks=1 00:15:02.541 --rc geninfo_unexecuted_blocks=1 00:15:02.541 00:15:02.541 ' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:02.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.541 --rc genhtml_branch_coverage=1 00:15:02.541 --rc genhtml_function_coverage=1 00:15:02.541 --rc genhtml_legend=1 00:15:02.541 --rc geninfo_all_blocks=1 00:15:02.541 --rc geninfo_unexecuted_blocks=1 00:15:02.541 00:15:02.541 ' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:02.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.541 --rc genhtml_branch_coverage=1 00:15:02.541 --rc genhtml_function_coverage=1 00:15:02.541 --rc genhtml_legend=1 00:15:02.541 --rc geninfo_all_blocks=1 00:15:02.541 --rc geninfo_unexecuted_blocks=1 00:15:02.541 00:15:02.541 ' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:02.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.541 --rc genhtml_branch_coverage=1 00:15:02.541 --rc genhtml_function_coverage=1 00:15:02.541 --rc genhtml_legend=1 00:15:02.541 --rc geninfo_all_blocks=1 00:15:02.541 --rc geninfo_unexecuted_blocks=1 00:15:02.541 00:15:02.541 ' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
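The lcov version check traced just above (the IFS=.-: split, the read -ra arrays, and the (( v < ... )) loop) is a field-by-field numeric comparison, here evaluating 1.15 < 2. A simplified sketch of that logic; the function name is hypothetical and this is not the verbatim scripts/common.sh implementation:

ver_lt() {
    local IFS=.-:                      # split version strings on . - : as the trace does
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                           # equal versions are not less-than
}
ver_lt 1.15 2 && echo "1.15 is less than 2"    # matches the result traced above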
00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:02.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:02.541 ************************************ 00:15:02.541 START TEST nvmf_example 00:15:02.541 ************************************ 00:15:02.541 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:02.802 * Looking for test storage... 
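Each sub-test is dispatched through run_test, which is why the log prefix keeps growing (nvmf_tcp.nvmf_target_extra.nvmf_example at this point). To reproduce just this sub-test outside the autotest harness one could, assuming the same checkout, the same E810/phy network environment this run has, and root privileges, run roughly:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./test/nvmf/target/nvmf_example.sh --transport=tcp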
00:15:02.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.802 --rc genhtml_branch_coverage=1 00:15:02.802 --rc genhtml_function_coverage=1 00:15:02.802 --rc genhtml_legend=1 00:15:02.802 --rc geninfo_all_blocks=1 00:15:02.802 --rc geninfo_unexecuted_blocks=1 00:15:02.802 00:15:02.802 ' 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.802 --rc genhtml_branch_coverage=1 00:15:02.802 --rc genhtml_function_coverage=1 00:15:02.802 --rc genhtml_legend=1 00:15:02.802 --rc geninfo_all_blocks=1 00:15:02.802 --rc geninfo_unexecuted_blocks=1 00:15:02.802 00:15:02.802 ' 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.802 --rc genhtml_branch_coverage=1 00:15:02.802 --rc genhtml_function_coverage=1 00:15:02.802 --rc genhtml_legend=1 00:15:02.802 --rc geninfo_all_blocks=1 00:15:02.802 --rc geninfo_unexecuted_blocks=1 00:15:02.802 00:15:02.802 ' 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.802 --rc genhtml_branch_coverage=1 00:15:02.802 --rc genhtml_function_coverage=1 00:15:02.802 --rc genhtml_legend=1 00:15:02.802 --rc geninfo_all_blocks=1 00:15:02.802 --rc geninfo_unexecuted_blocks=1 00:15:02.802 00:15:02.802 ' 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:02.802 14:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:02.802 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:02.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:02.803 14:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:15:02.803 14:08:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:15:10.939 14:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.939 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:10.940 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:10.940 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:10.940 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:10.940 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.940 14:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:10.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
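The nvmftestinit trace above wires the two detected E810 ports into a point-to-point test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target end (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator end (10.0.0.1/24), and an iptables rule admits the NVMe/TCP listener port 4420. A minimal stand-alone sketch of the same wiring, assuming two spare interfaces named tgt0 and ini0 (hypothetical names, not the cvl_* devices above):

    # create an isolated namespace for the target side
    ip netns add spdk_tgt_ns
    # move one port into it and address both ends on the same /24
    ip link set tgt0 netns spdk_tgt_ns
    ip addr add 10.0.0.1/24 dev ini0
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev tgt0
    # bring the links (and loopback inside the namespace) up
    ip link set ini0 up
    ip netns exec spdk_tgt_ns ip link set tgt0 up
    ip netns exec spdk_tgt_ns ip link set lo up
    # allow the NVMe/TCP listener port through the host firewall
    iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT
    # sanity check, mirroring the ping below
    ping -c 1 10.0.0.2

The ping replies recorded next confirm the two ends can reach each other before the example target is started.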
00:15:10.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:15:10.940 00:15:10.940 --- 10.0.0.2 ping statistics --- 00:15:10.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.940 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:15:10.940 00:15:10.940 --- 10.0.0.1 ping statistics --- 00:15:10.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.940 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2701274 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2701274 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2701274 ']' 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.940 14:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.940 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.941 14:08:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:11.201 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.201 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:15:11.201 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:11.202 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:11.202 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:11.462 14:08:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:23.686 Initializing NVMe Controllers 00:15:23.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:23.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:23.686 Initialization complete. Launching workers. 00:15:23.686 ======================================================== 00:15:23.686 Latency(us) 00:15:23.686 Device Information : IOPS MiB/s Average min max 00:15:23.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19023.84 74.31 3364.04 641.70 16017.23 00:15:23.686 ======================================================== 00:15:23.686 Total : 19023.84 74.31 3364.04 641.70 16017.23 00:15:23.686 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:23.686 rmmod nvme_tcp 00:15:23.686 rmmod nvme_fabrics 00:15:23.686 rmmod nvme_keyring 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2701274 ']' 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2701274 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2701274 ']' 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2701274 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2701274 00:15:23.686 14:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:15:23.686 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2701274' 00:15:23.687 killing process with pid 2701274 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2701274 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2701274 00:15:23.687 nvmf threads initialize successfully 00:15:23.687 bdev subsystem init successfully 00:15:23.687 created a nvmf target service 00:15:23.687 create targets's poll groups done 00:15:23.687 all subsystems of target started 00:15:23.687 nvmf target is running 00:15:23.687 all subsystems of target stopped 00:15:23.687 destroy targets's poll groups done 00:15:23.687 destroyed the nvmf target service 00:15:23.687 bdev subsystem finish successfully 00:15:23.687 nvmf threads destroy successfully 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.687 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:23.947 00:15:23.947 real 0m21.404s 00:15:23.947 user 0m46.382s 00:15:23.947 sys 0m7.057s 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:23.947 ************************************ 00:15:23.947 END TEST nvmf_example 00:15:23.947 ************************************ 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.947 ************************************ 00:15:23.947 START TEST nvmf_filesystem 00:15:23.947 ************************************ 00:15:23.947 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:24.209 * Looking for test storage... 00:15:24.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:24.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.209 --rc genhtml_branch_coverage=1 00:15:24.209 --rc genhtml_function_coverage=1 00:15:24.209 --rc genhtml_legend=1 00:15:24.209 --rc geninfo_all_blocks=1 00:15:24.209 --rc geninfo_unexecuted_blocks=1 00:15:24.209 00:15:24.209 ' 00:15:24.209 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:24.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.210 --rc genhtml_branch_coverage=1 00:15:24.210 --rc genhtml_function_coverage=1 00:15:24.210 --rc genhtml_legend=1 00:15:24.210 --rc geninfo_all_blocks=1 00:15:24.210 --rc geninfo_unexecuted_blocks=1 00:15:24.210 00:15:24.210 ' 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:24.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.210 --rc genhtml_branch_coverage=1 00:15:24.210 --rc genhtml_function_coverage=1 00:15:24.210 --rc genhtml_legend=1 00:15:24.210 --rc geninfo_all_blocks=1 00:15:24.210 --rc geninfo_unexecuted_blocks=1 00:15:24.210 00:15:24.210 ' 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:24.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.210 --rc genhtml_branch_coverage=1 00:15:24.210 --rc genhtml_function_coverage=1 00:15:24.210 --rc genhtml_legend=1 00:15:24.210 --rc geninfo_all_blocks=1 00:15:24.210 --rc geninfo_unexecuted_blocks=1 00:15:24.210 00:15:24.210 ' 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:15:24.210 14:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:24.210 
14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:24.210 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:24.211 #define SPDK_CONFIG_H 00:15:24.211 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:24.211 #define SPDK_CONFIG_APPS 1 00:15:24.211 #define SPDK_CONFIG_ARCH native 00:15:24.211 #undef SPDK_CONFIG_ASAN 00:15:24.211 #undef SPDK_CONFIG_AVAHI 00:15:24.211 #undef SPDK_CONFIG_CET 00:15:24.211 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:24.211 #define SPDK_CONFIG_COVERAGE 1 00:15:24.211 #define SPDK_CONFIG_CROSS_PREFIX 00:15:24.211 #undef SPDK_CONFIG_CRYPTO 00:15:24.211 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:24.211 #undef SPDK_CONFIG_CUSTOMOCF 00:15:24.211 #undef SPDK_CONFIG_DAOS 00:15:24.211 #define SPDK_CONFIG_DAOS_DIR 00:15:24.211 #define SPDK_CONFIG_DEBUG 1 00:15:24.211 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:24.211 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:24.211 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:24.211 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:24.211 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:24.211 #undef SPDK_CONFIG_DPDK_UADK 00:15:24.211 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:24.211 #define SPDK_CONFIG_EXAMPLES 1 00:15:24.211 #undef SPDK_CONFIG_FC 00:15:24.211 #define SPDK_CONFIG_FC_PATH 00:15:24.211 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:24.211 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:24.211 #define SPDK_CONFIG_FSDEV 1 00:15:24.211 #undef SPDK_CONFIG_FUSE 00:15:24.211 #undef SPDK_CONFIG_FUZZER 00:15:24.211 #define SPDK_CONFIG_FUZZER_LIB 00:15:24.211 #undef SPDK_CONFIG_GOLANG 00:15:24.211 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:24.211 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:24.211 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:24.211 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:24.211 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:24.211 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:24.211 #undef SPDK_CONFIG_HAVE_LZ4 00:15:24.211 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:24.211 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:24.211 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:24.211 #define SPDK_CONFIG_IDXD 1 00:15:24.211 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:24.211 #undef SPDK_CONFIG_IPSEC_MB 00:15:24.211 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:24.211 #define SPDK_CONFIG_ISAL 1 00:15:24.211 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:24.211 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:24.211 #define SPDK_CONFIG_LIBDIR 00:15:24.211 #undef SPDK_CONFIG_LTO 00:15:24.211 #define SPDK_CONFIG_MAX_LCORES 128 00:15:24.211 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:24.211 #define SPDK_CONFIG_NVME_CUSE 1 00:15:24.211 #undef SPDK_CONFIG_OCF 00:15:24.211 #define SPDK_CONFIG_OCF_PATH 00:15:24.211 #define SPDK_CONFIG_OPENSSL_PATH 00:15:24.211 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:24.211 #define SPDK_CONFIG_PGO_DIR 00:15:24.211 #undef SPDK_CONFIG_PGO_USE 00:15:24.211 #define SPDK_CONFIG_PREFIX /usr/local 00:15:24.211 #undef SPDK_CONFIG_RAID5F 00:15:24.211 #undef SPDK_CONFIG_RBD 00:15:24.211 #define SPDK_CONFIG_RDMA 1 00:15:24.211 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:24.211 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:24.211 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:24.211 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:24.211 #define SPDK_CONFIG_SHARED 1 00:15:24.211 #undef SPDK_CONFIG_SMA 00:15:24.211 #define SPDK_CONFIG_TESTS 1 00:15:24.211 #undef SPDK_CONFIG_TSAN 
00:15:24.211 #define SPDK_CONFIG_UBLK 1 00:15:24.211 #define SPDK_CONFIG_UBSAN 1 00:15:24.211 #undef SPDK_CONFIG_UNIT_TESTS 00:15:24.211 #undef SPDK_CONFIG_URING 00:15:24.211 #define SPDK_CONFIG_URING_PATH 00:15:24.211 #undef SPDK_CONFIG_URING_ZNS 00:15:24.211 #undef SPDK_CONFIG_USDT 00:15:24.211 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:24.211 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:24.211 #define SPDK_CONFIG_VFIO_USER 1 00:15:24.211 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:24.211 #define SPDK_CONFIG_VHOST 1 00:15:24.211 #define SPDK_CONFIG_VIRTIO 1 00:15:24.211 #undef SPDK_CONFIG_VTUNE 00:15:24.211 #define SPDK_CONFIG_VTUNE_DIR 00:15:24.211 #define SPDK_CONFIG_WERROR 1 00:15:24.211 #define SPDK_CONFIG_WPDK_DIR 00:15:24.211 #undef SPDK_CONFIG_XNVME 00:15:24.211 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:24.211 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:24.474 14:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:24.474 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
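The long run of ": 0" / "export SPDK_TEST_*" pairs in the trace above is autotest_common.sh giving every test flag a default before exporting it. A minimal sketch of that default-then-export idiom, assuming the usual : "${VAR:=default}" form with hypothetical default values (only the flags set in autorun-spdk.conf end up as 1); xtrace prints the expanded result, which is why the log shows ": 0" or ": 1" rather than the form below:

    #!/usr/bin/env bash
    # Assign a default only when the variable is unset, then export it so
    # every child test script sees the same value.
    : "${SPDK_RUN_VALGRIND:=0}"
    export SPDK_RUN_VALGRIND
    : "${SPDK_TEST_NVME_CLI:=0}"
    export SPDK_TEST_NVME_CLI
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF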
00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:24.475 14:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:24.475 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
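The ASAN_OPTIONS and UBSAN_OPTIONS exports just above are plain colon-separated key=value strings read by the sanitizer runtimes at process start. A small illustrative sketch of building such a string; the join_opts helper is hypothetical and not part of the SPDK scripts, while the option values are the ones from the trace:

    #!/usr/bin/env bash
    # Join key=value pairs with ':' the way the sanitizer runtimes expect.
    join_opts() { local IFS=:; printf '%s\n' "$*"; }

    UBSAN_OPTIONS=$(join_opts halt_on_error=1 print_stacktrace=1 \
        abort_on_error=1 disable_coredump=0 exitcode=134)
    export UBSAN_OPTIONS
    # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134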
00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
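The asan_suppression_file steps in the trace above rebuild a LeakSanitizer suppression list and point LSAN_OPTIONS at it, so the known libfuse3 leak does not fail the run. A hedged sketch of that pattern (the exact redirections in autotest_common.sh may differ slightly):

    #!/usr/bin/env bash
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"                       # start from a clean suppression file
    echo "leak:libfuse3.so" >> "$supp"   # suppression entry seen in the trace
    export LSAN_OPTIONS="suppressions=$supp"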
00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2704630 ]] 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2704630 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
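The "[[ -z 2704630 ]]" and "kill -0 2704630" steps above are the standard liveness probe for the autotest driver process: signal 0 delivers nothing but fails when the PID no longer exists or cannot be signalled. A minimal sketch of the idiom, with the PID value taken from the trace:

    #!/usr/bin/env bash
    pid=2704630   # driver PID from the log; substitute any running PID to try it
    if [[ -n "$pid" ]] && kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive"
    else
        echo "process $pid has exited or is not signallable"
    fi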
00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:24.476 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.psaMUr 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.psaMUr/tests/target /tmp/spdk.psaMUr 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:15:24.477 14:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122589843456 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356529664 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6766686208 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668233728 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847943168 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23363584 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:24.477 14:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677519360 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=745472 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:24.477 * Looking for test storage... 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122589843456 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8981278720 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:15:24.477 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:24.477 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:24.477 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.477 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.477 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.477 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.477 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.478 --rc genhtml_branch_coverage=1 00:15:24.478 --rc genhtml_function_coverage=1 00:15:24.478 --rc genhtml_legend=1 00:15:24.478 --rc geninfo_all_blocks=1 00:15:24.478 --rc geninfo_unexecuted_blocks=1 00:15:24.478 00:15:24.478 ' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.478 --rc genhtml_branch_coverage=1 00:15:24.478 --rc genhtml_function_coverage=1 00:15:24.478 --rc genhtml_legend=1 00:15:24.478 --rc geninfo_all_blocks=1 00:15:24.478 --rc geninfo_unexecuted_blocks=1 00:15:24.478 00:15:24.478 ' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.478 --rc genhtml_branch_coverage=1 00:15:24.478 --rc genhtml_function_coverage=1 00:15:24.478 --rc genhtml_legend=1 00:15:24.478 --rc geninfo_all_blocks=1 00:15:24.478 --rc geninfo_unexecuted_blocks=1 00:15:24.478 00:15:24.478 ' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:24.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.478 --rc genhtml_branch_coverage=1 00:15:24.478 --rc genhtml_function_coverage=1 00:15:24.478 --rc genhtml_legend=1 00:15:24.478 --rc geninfo_all_blocks=1 00:15:24.478 --rc geninfo_unexecuted_blocks=1 00:15:24.478 00:15:24.478 ' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:24.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:24.478 14:09:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.478 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.479 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.479 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:24.479 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:24.479 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:15:24.479 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:32.644 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:32.644 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:32.644 14:09:20 
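The array setup traced above builds per-NIC-family lists (e810, x722, mlx) keyed on PCI vendor:device IDs. Below is a minimal standalone sketch of the same idea using lspci; SPDK's own pci_bus_cache helper is not shown in this excerpt, so this is illustrative rather than the harness code.

    intel=8086 mellanox=15b3
    declare -a e810 x722 mlx

    # lspci -Dn -d ::0200 lists Ethernet-class functions as "<addr> 0200: <vendor>:<device> ...".
    while read -r addr _ id _; do
        case "$id" in
            "$intel":1592|"$intel":159b) e810+=("$addr") ;;   # the two E810 IDs matched above
            "$intel":37d2)               x722+=("$addr") ;;
            "$mellanox":*)               mlx+=("$addr") ;;
        esac
    done < <(lspci -Dn -d ::0200)

    echo "e810: ${e810[*]:-none}  x722: ${x722[*]:-none}  mlx: ${mlx[*]:-none}"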
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:32.644 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:32.645 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:32.645 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:32.645 14:09:20 
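The "Found net devices under ..." lines come from globbing /sys/bus/pci/devices/<addr>/net/ and keeping interfaces whose state reads "up". The sketch below does the same lookup in isolation; operstate is used here as a stand-in for whatever exact state check common.sh performs, and the PCI address is one of the two found above.

    pci=0000:4b:00.0

    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue                          # no netdev (e.g. bound to vfio-pci)
        dev=${path##*/}
        if [ "$(cat /sys/class/net/"$dev"/operstate 2>/dev/null)" = up ]; then
            echo "Found net devices under $pci: $dev"
        fi
    done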
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:32.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:15:32.645 00:15:32.645 --- 10.0.0.2 ping statistics --- 00:15:32.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.645 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:32.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:15:32.645 00:15:32.645 --- 10.0.0.1 ping statistics --- 00:15:32.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.645 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:32.645 ************************************ 00:15:32.645 START TEST nvmf_filesystem_no_in_capsule 00:15:32.645 ************************************ 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2708315 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2708315 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2708315 ']' 00:15:32.645 
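Condensed, the nvmf_tcp_init sequence traced above turns the two physical ports into a two-endpoint topology: the target port (cvl_0_0) is moved into its own network namespace with 10.0.0.2, the initiator port (cvl_0_1) keeps 10.0.0.1 in the root namespace, TCP/4420 is opened in iptables, and connectivity is verified in both directions. As a standalone sketch, with the interface names from this run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target side lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                      # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> initiator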
14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.645 14:09:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:32.645 [2024-12-06 14:09:20.701247] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:15:32.645 [2024-12-06 14:09:20.701307] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.645 [2024-12-06 14:09:20.802526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.645 [2024-12-06 14:09:20.855734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.645 [2024-12-06 14:09:20.855791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.645 [2024-12-06 14:09:20.855800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.645 [2024-12-06 14:09:20.855807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.645 [2024-12-06 14:09:20.855813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
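nvmfappstart launches nvmf_tgt inside that namespace and waitforlisten blocks until the application's RPC socket is up. The sketch below is a simplified stand-in for those helpers (it merely polls for the default socket file; SPDK_DIR is an assumed variable matching this workspace):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    tgt_pid=$!

    # Wait for the default RPC socket (/var/tmp/spdk.sock) before issuing any rpc.py calls.
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done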
00:15:32.645 [2024-12-06 14:09:20.857949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.645 [2024-12-06 14:09:20.858114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.645 [2024-12-06 14:09:20.858278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.645 [2024-12-06 14:09:20.858278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.907 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.907 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:32.907 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.907 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.907 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.168 [2024-12-06 14:09:21.576388] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.168 Malloc1 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.168 14:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.168 [2024-12-06 14:09:21.735718] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.168 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:33.168 { 00:15:33.168 "name": "Malloc1", 00:15:33.168 "aliases": [ 00:15:33.168 "d95b82dc-3931-48b9-b370-c6c9fcb77b6b" 00:15:33.168 ], 00:15:33.168 "product_name": "Malloc disk", 00:15:33.168 "block_size": 512, 00:15:33.168 "num_blocks": 1048576, 00:15:33.168 "uuid": "d95b82dc-3931-48b9-b370-c6c9fcb77b6b", 00:15:33.168 "assigned_rate_limits": { 00:15:33.168 "rw_ios_per_sec": 0, 00:15:33.168 "rw_mbytes_per_sec": 0, 00:15:33.168 "r_mbytes_per_sec": 0, 00:15:33.168 "w_mbytes_per_sec": 0 00:15:33.168 }, 00:15:33.168 "claimed": true, 00:15:33.168 "claim_type": "exclusive_write", 00:15:33.168 "zoned": false, 00:15:33.168 "supported_io_types": { 00:15:33.168 "read": 
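The rpc_cmd calls above amount to the following provisioning sequence, written directly against scripts/rpc.py (path assumed; the flags are copied from the trace): create the TCP transport with in-capsule data disabled, back a namespace with a 512 MiB malloc bdev, and expose it on 10.0.0.2:4420.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0       # -c 0: no in-capsule data
    $RPC bdev_malloc_create 512 512 -b Malloc1              # 512 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420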
true, 00:15:33.168 "write": true, 00:15:33.168 "unmap": true, 00:15:33.168 "flush": true, 00:15:33.168 "reset": true, 00:15:33.168 "nvme_admin": false, 00:15:33.169 "nvme_io": false, 00:15:33.169 "nvme_io_md": false, 00:15:33.169 "write_zeroes": true, 00:15:33.169 "zcopy": true, 00:15:33.169 "get_zone_info": false, 00:15:33.169 "zone_management": false, 00:15:33.169 "zone_append": false, 00:15:33.169 "compare": false, 00:15:33.169 "compare_and_write": false, 00:15:33.169 "abort": true, 00:15:33.169 "seek_hole": false, 00:15:33.169 "seek_data": false, 00:15:33.169 "copy": true, 00:15:33.169 "nvme_iov_md": false 00:15:33.169 }, 00:15:33.169 "memory_domains": [ 00:15:33.169 { 00:15:33.169 "dma_device_id": "system", 00:15:33.169 "dma_device_type": 1 00:15:33.169 }, 00:15:33.169 { 00:15:33.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.169 "dma_device_type": 2 00:15:33.169 } 00:15:33.169 ], 00:15:33.169 "driver_specific": {} 00:15:33.169 } 00:15:33.169 ]' 00:15:33.169 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:33.430 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:33.430 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:33.430 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:33.430 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:33.430 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:33.430 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:33.430 14:09:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:34.815 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:34.815 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:34.815 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.815 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:34.815 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:37.356 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:37.926 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:38.868 ************************************ 00:15:38.868 START TEST filesystem_ext4 00:15:38.868 ************************************ 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
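On the initiator side the test connects to the subsystem, resolves the namespace's block device by the serial configured on the target, and lays down a single GPT partition spanning the device. Condensed from the trace (the rig-specific --hostnqn/--hostid arguments are omitted here):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    sleep 2                                                 # allow the block device to appear

    # Serial matches -s SPDKISFASTANDAWESOME set on the subsystem.
    dev=$(lsblk -l -o NAME,SERIAL | awk '$2 == "SPDKISFASTANDAWESOME" {print $1; exit}')

    parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe                                               # partition appears as /dev/${dev}p1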
00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:15:38.868 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:38.868 mke2fs 1.47.0 (5-Feb-2023) 00:15:38.868 Discarding device blocks: 0/522240 done 00:15:38.868 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:38.868 Filesystem UUID: 4502b7e3-0f5b-4de0-bd9d-35957bd4f4ef 00:15:38.868 Superblock backups stored on blocks: 00:15:38.868 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:38.868 00:15:38.868 Allocating group tables: 0/64 done 00:15:38.868 Writing inode tables: 0/64 done 00:15:38.868 Creating journal (8192 blocks): done 00:15:39.129 Writing superblocks and filesystem accounting information: 0/64 done 00:15:39.129 00:15:39.129 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:15:39.129 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:44.420 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:44.420 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:15:44.420 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:44.420 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:15:44.420 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:44.420 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:44.420 
14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2708315 00:15:44.420 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:44.420 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:44.420 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:44.420 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:44.420 00:15:44.420 real 0m5.720s 00:15:44.420 user 0m0.022s 00:15:44.420 sys 0m0.059s 00:15:44.420 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.420 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:44.420 ************************************ 00:15:44.420 END TEST filesystem_ext4 00:15:44.420 ************************************ 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:44.681 ************************************ 00:15:44.681 START TEST filesystem_btrfs 00:15:44.681 ************************************ 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:15:44.681 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:15:44.681 14:09:33 
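Each filesystem_* subtest (ext4 above; btrfs and xfs follow) runs the same recipe: format the partition, mount it, do a small create/delete cycle with syncs, unmount, and use kill -0 to confirm the target process survived the I/O. A compact sketch of that loop body, with the device and pid from this run:

    fstype=ext4 part=/dev/nvme0n1p1 nvmfpid=2708315

    case "$fstype" in
        ext4) mkfs.ext4 -F "$part" ;;
        *)    "mkfs.$fstype" -f "$part" ;;                  # btrfs and xfs force with -f
    esac

    mkdir -p /mnt/device
    mount "$part" /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device

    kill -0 "$nvmfpid" && echo "nvmf_tgt survived the $fstype pass"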
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:15:44.682 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:15:44.682 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:44.943 btrfs-progs v6.8.1 00:15:44.943 See https://btrfs.readthedocs.io for more information. 00:15:44.943 00:15:44.943 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:44.943 NOTE: several default settings have changed in version 5.15, please make sure 00:15:44.943 this does not affect your deployments: 00:15:44.943 - DUP for metadata (-m dup) 00:15:44.943 - enabled no-holes (-O no-holes) 00:15:44.943 - enabled free-space-tree (-R free-space-tree) 00:15:44.943 00:15:44.943 Label: (null) 00:15:44.943 UUID: a8cbd084-0baf-41da-b02a-5e3cce0c1353 00:15:44.943 Node size: 16384 00:15:44.943 Sector size: 4096 (CPU page size: 4096) 00:15:44.943 Filesystem size: 510.00MiB 00:15:44.943 Block group profiles: 00:15:44.943 Data: single 8.00MiB 00:15:44.943 Metadata: DUP 32.00MiB 00:15:44.943 System: DUP 8.00MiB 00:15:44.943 SSD detected: yes 00:15:44.943 Zoned device: no 00:15:44.943 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:44.943 Checksum: crc32c 00:15:44.943 Number of devices: 1 00:15:44.943 Devices: 00:15:44.943 ID SIZE PATH 00:15:44.943 1 510.00MiB /dev/nvme0n1p1 00:15:44.943 00:15:44.943 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:15:44.943 14:09:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2708315 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:45.884 
14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:45.884 00:15:45.884 real 0m1.238s 00:15:45.884 user 0m0.023s 00:15:45.884 sys 0m0.068s 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:45.884 ************************************ 00:15:45.884 END TEST filesystem_btrfs 00:15:45.884 ************************************ 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:45.884 ************************************ 00:15:45.884 START TEST filesystem_xfs 00:15:45.884 ************************************ 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:15:45.884 14:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:46.825 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:46.825 = sectsz=512 attr=2, projid32bit=1 00:15:46.825 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:46.825 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:46.825 data 
= bsize=4096 blocks=130560, imaxpct=25 00:15:46.825 = sunit=0 swidth=0 blks 00:15:46.825 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:46.825 log =internal log bsize=4096 blocks=16384, version=2 00:15:46.825 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:46.825 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:47.765 Discarding blocks...Done. 00:15:47.765 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:15:47.765 14:09:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2708315 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:49.693 00:15:49.693 real 0m3.882s 00:15:49.693 user 0m0.037s 00:15:49.693 sys 0m0.046s 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.693 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:49.693 ************************************ 00:15:49.693 END TEST filesystem_xfs 00:15:49.693 ************************************ 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:49.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.951 14:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2708315 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2708315 ']' 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2708315 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.951 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2708315 00:15:50.211 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.211 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.211 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2708315' 00:15:50.211 killing process with pid 2708315 00:15:50.211 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2708315 00:15:50.211 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
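Teardown mirrors the setup: the test partition is removed under flock (serializing against udev/partprobe), the initiator disconnects, the subsystem is deleted over RPC, and the target process is killed and reaped. Condensed from the trace, with the rpc.py path assumed as before:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    kill 2708315            # killprocess; the harness then waits for the pid to exit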
common/autotest_common.sh@978 -- # wait 2708315 00:15:50.211 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:50.211 00:15:50.211 real 0m18.198s 00:15:50.211 user 1m11.773s 00:15:50.211 sys 0m1.369s 00:15:50.211 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.211 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:50.211 ************************************ 00:15:50.211 END TEST nvmf_filesystem_no_in_capsule 00:15:50.211 ************************************ 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:50.471 ************************************ 00:15:50.471 START TEST nvmf_filesystem_in_capsule 00:15:50.471 ************************************ 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2712181 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2712181 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2712181 ']' 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.471 14:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:50.471 [2024-12-06 14:09:38.972887] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:15:50.471 [2024-12-06 14:09:38.972925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.471 [2024-12-06 14:09:39.055353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.471 [2024-12-06 14:09:39.085163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.471 [2024-12-06 14:09:39.085191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.471 [2024-12-06 14:09:39.085197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.471 [2024-12-06 14:09:39.085201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.471 [2024-12-06 14:09:39.085205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.471 [2024-12-06 14:09:39.086444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.471 [2024-12-06 14:09:39.086610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.471 [2024-12-06 14:09:39.086839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.471 [2024-12-06 14:09:39.086840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.470 [2024-12-06 14:09:39.820971] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.470 14:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.470 Malloc1 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.470 [2024-12-06 14:09:39.957273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:51.470 14:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.470 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:51.470 { 00:15:51.470 "name": "Malloc1", 00:15:51.470 "aliases": [ 00:15:51.470 "74e09acd-4ff4-4860-b1bd-90dfdb20e920" 00:15:51.470 ], 00:15:51.470 "product_name": "Malloc disk", 00:15:51.470 "block_size": 512, 00:15:51.470 "num_blocks": 1048576, 00:15:51.470 "uuid": "74e09acd-4ff4-4860-b1bd-90dfdb20e920", 00:15:51.470 "assigned_rate_limits": { 00:15:51.470 "rw_ios_per_sec": 0, 00:15:51.470 "rw_mbytes_per_sec": 0, 00:15:51.470 "r_mbytes_per_sec": 0, 00:15:51.470 "w_mbytes_per_sec": 0 00:15:51.470 }, 00:15:51.470 "claimed": true, 00:15:51.470 "claim_type": "exclusive_write", 00:15:51.470 "zoned": false, 00:15:51.470 "supported_io_types": { 00:15:51.470 "read": true, 00:15:51.470 "write": true, 00:15:51.470 "unmap": true, 00:15:51.470 "flush": true, 00:15:51.470 "reset": true, 00:15:51.470 "nvme_admin": false, 00:15:51.470 "nvme_io": false, 00:15:51.470 "nvme_io_md": false, 00:15:51.470 "write_zeroes": true, 00:15:51.470 "zcopy": true, 00:15:51.470 "get_zone_info": false, 00:15:51.470 "zone_management": false, 00:15:51.470 "zone_append": false, 00:15:51.470 "compare": false, 00:15:51.470 "compare_and_write": false, 00:15:51.471 "abort": true, 00:15:51.471 "seek_hole": false, 00:15:51.471 "seek_data": false, 00:15:51.471 "copy": true, 00:15:51.471 "nvme_iov_md": false 00:15:51.471 }, 00:15:51.471 "memory_domains": [ 00:15:51.471 { 00:15:51.471 "dma_device_id": "system", 00:15:51.471 "dma_device_type": 1 00:15:51.471 }, 00:15:51.471 { 00:15:51.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.471 "dma_device_type": 2 00:15:51.471 } 00:15:51.471 ], 00:15:51.471 "driver_specific": {} 00:15:51.471 } 00:15:51.471 ]' 00:15:51.471 14:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:51.471 14:09:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:51.471 14:09:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:51.759 14:09:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:51.759 14:09:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:51.759 14:09:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:51.759 14:09:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:51.759 14:09:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.152 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:53.152 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:53.152 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.152 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:53.152 14:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:55.066 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:55.327 14:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:55.899 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:56.841 ************************************ 00:15:56.841 START TEST filesystem_in_capsule_ext4 00:15:56.841 ************************************ 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:15:56.841 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:56.841 mke2fs 1.47.0 (5-Feb-2023) 00:15:56.841 Discarding device blocks: 0/522240 done 00:15:56.841 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:56.841 Filesystem UUID: 31859b48-3e2b-481a-bd47-97676ee260b1 00:15:56.841 Superblock backups stored on blocks: 00:15:56.841 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:56.841 00:15:56.841 Allocating group tables: 0/64 done 00:15:56.841 Writing inode tables: 
0/64 done 00:16:00.141 Creating journal (8192 blocks): done 00:16:01.652 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:16:01.652 00:16:01.652 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:01.652 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:06.933 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:06.933 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:06.933 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2712181 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:07.193 00:16:07.193 real 0m10.302s 00:16:07.193 user 0m0.029s 00:16:07.193 sys 0m0.056s 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:07.193 ************************************ 00:16:07.193 END TEST filesystem_in_capsule_ext4 00:16:07.193 ************************************ 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.193 
************************************ 00:16:07.193 START TEST filesystem_in_capsule_btrfs 00:16:07.193 ************************************ 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:07.193 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:07.453 btrfs-progs v6.8.1 00:16:07.453 See https://btrfs.readthedocs.io for more information. 00:16:07.453 00:16:07.453 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
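The btrfs pass, like the ext4 pass above and the xfs pass below, goes through the common make_filesystem wrapper: only mkfs.ext4 takes an upper-case -F to force creation over an existing signature, while btrfs and xfs take -f, which is exactly what the traced '[ btrfs = ext4 ]' test decides just above. A stripped-down sketch of that selection; make_fs is an illustrative name, and the real helper in autotest_common.sh also declares a retry counter i (visible in the trace) that is omitted here:

    # Pick the right "force" flag for the mkfs flavour, then build the filesystem.
    make_fs() {
        local fstype=$1 dev=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F          # mkfs.ext4 uses upper-case -F to force
        else
            force=-f          # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs."$fstype" "$force" "$dev"
    }

    make_fs btrfs /dev/nvme0n1p1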
00:16:07.453 NOTE: several default settings have changed in version 5.15, please make sure 00:16:07.453 this does not affect your deployments: 00:16:07.453 - DUP for metadata (-m dup) 00:16:07.453 - enabled no-holes (-O no-holes) 00:16:07.453 - enabled free-space-tree (-R free-space-tree) 00:16:07.453 00:16:07.453 Label: (null) 00:16:07.453 UUID: 23333175-7ae0-4ab8-af06-b966d8193d4c 00:16:07.453 Node size: 16384 00:16:07.453 Sector size: 4096 (CPU page size: 4096) 00:16:07.453 Filesystem size: 510.00MiB 00:16:07.453 Block group profiles: 00:16:07.453 Data: single 8.00MiB 00:16:07.453 Metadata: DUP 32.00MiB 00:16:07.453 System: DUP 8.00MiB 00:16:07.453 SSD detected: yes 00:16:07.453 Zoned device: no 00:16:07.453 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:07.453 Checksum: crc32c 00:16:07.453 Number of devices: 1 00:16:07.453 Devices: 00:16:07.453 ID SIZE PATH 00:16:07.453 1 510.00MiB /dev/nvme0n1p1 00:16:07.453 00:16:07.453 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:07.453 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:07.453 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:07.453 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:07.713 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:07.713 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:07.713 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:07.713 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:07.713 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2712181 00:16:07.713 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:07.713 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:07.713 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:07.714 00:16:07.714 real 0m0.447s 00:16:07.714 user 0m0.020s 00:16:07.714 sys 0m0.066s 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:16:07.714 ************************************ 00:16:07.714 END TEST filesystem_in_capsule_btrfs 00:16:07.714 ************************************ 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.714 ************************************ 00:16:07.714 START TEST filesystem_in_capsule_xfs 00:16:07.714 ************************************ 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:07.714 14:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:07.714 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:07.714 = sectsz=512 attr=2, projid32bit=1 00:16:07.714 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:07.714 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:07.714 data = bsize=4096 blocks=130560, imaxpct=25 00:16:07.714 = sunit=0 swidth=0 blks 00:16:07.714 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:07.714 log =internal log bsize=4096 blocks=16384, version=2 00:16:07.714 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:07.714 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:08.654 Discarding blocks...Done. 
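After each mkfs the script runs the same smoke test over the NVMe-oF block device: mount the partition, create and delete a file with syncs in between, unmount, confirm with kill -0 that the target process (pid 2712181 here) is still alive, and check via lsblk that the namespace and its partition are still visible. The traced logic lives inline in target/filesystem.sh; the function and variable names below are illustrative only:

    verify_fs() {
        local part=$1 pid=$2 mnt=/mnt/device

        mkdir -p "$mnt"
        mount "$part" "$mnt"

        touch "$mnt/aaa"       # write through the exported namespace
        sync
        rm "$mnt/aaa"
        sync
        umount "$mnt"

        kill -0 "$pid"                            # target app must still be running
        lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
        lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible
    }

    verify_fs /dev/nvme0n1p1 "$nvmfpid"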
00:16:08.654 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:08.654 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:10.564 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:10.564 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:10.564 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:10.564 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:10.565 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:10.565 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:10.565 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2712181 00:16:10.565 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:10.565 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:10.565 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:10.565 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:10.565 00:16:10.565 real 0m2.975s 00:16:10.565 user 0m0.026s 00:16:10.565 sys 0m0.054s 00:16:10.565 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.565 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:10.565 ************************************ 00:16:10.565 END TEST filesystem_in_capsule_xfs 00:16:10.565 ************************************ 00:16:10.825 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:10.825 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:10.825 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2712181 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2712181 ']' 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2712181 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2712181 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2712181' 00:16:11.086 killing process with pid 2712181 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2712181 00:16:11.086 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2712181 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:11.347 00:16:11.347 real 0m20.873s 00:16:11.347 user 1m22.642s 00:16:11.347 sys 0m1.297s 00:16:11.347 14:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:11.347 ************************************ 00:16:11.347 END TEST nvmf_filesystem_in_capsule 00:16:11.347 ************************************ 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:11.347 rmmod nvme_tcp 00:16:11.347 rmmod nvme_fabrics 00:16:11.347 rmmod nvme_keyring 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.347 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.348 14:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.893 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:13.893 00:16:13.893 real 0m49.393s 00:16:13.893 user 2m36.866s 00:16:13.893 sys 0m8.512s 00:16:13.893 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.893 14:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:13.893 
************************************ 00:16:13.893 END TEST nvmf_filesystem 00:16:13.893 ************************************ 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.893 ************************************ 00:16:13.893 START TEST nvmf_target_discovery 00:16:13.893 ************************************ 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:13.893 * Looking for test storage... 00:16:13.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:13.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.893 --rc genhtml_branch_coverage=1 00:16:13.893 --rc genhtml_function_coverage=1 00:16:13.893 --rc genhtml_legend=1 00:16:13.893 --rc geninfo_all_blocks=1 00:16:13.893 --rc geninfo_unexecuted_blocks=1 00:16:13.893 00:16:13.893 ' 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:13.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.893 --rc genhtml_branch_coverage=1 00:16:13.893 --rc genhtml_function_coverage=1 00:16:13.893 --rc genhtml_legend=1 00:16:13.893 --rc geninfo_all_blocks=1 00:16:13.893 --rc geninfo_unexecuted_blocks=1 00:16:13.893 00:16:13.893 ' 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:13.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.893 --rc genhtml_branch_coverage=1 00:16:13.893 --rc genhtml_function_coverage=1 00:16:13.893 --rc genhtml_legend=1 00:16:13.893 --rc geninfo_all_blocks=1 00:16:13.893 --rc geninfo_unexecuted_blocks=1 00:16:13.893 00:16:13.893 ' 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:13.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.893 --rc genhtml_branch_coverage=1 00:16:13.893 --rc genhtml_function_coverage=1 00:16:13.893 --rc genhtml_legend=1 00:16:13.893 --rc geninfo_all_blocks=1 00:16:13.893 --rc geninfo_unexecuted_blocks=1 00:16:13.893 00:16:13.893 ' 00:16:13.893 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:16:13.894 14:10:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:16:22.031 14:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:22.031 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:22.031 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:22.031 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:22.032 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:22.032 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.032 14:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:22.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:16:22.032 00:16:22.032 --- 10.0.0.2 ping statistics --- 00:16:22.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.032 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:16:22.032 00:16:22.032 --- 10.0.0.1 ping statistics --- 00:16:22.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.032 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2720643 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2720643 00:16:22.032 14:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2720643 ']' 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.032 14:10:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.032 [2024-12-06 14:10:09.890915] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:16:22.032 [2024-12-06 14:10:09.890984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.032 [2024-12-06 14:10:09.989240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.032 [2024-12-06 14:10:10.048151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.032 [2024-12-06 14:10:10.048212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.032 [2024-12-06 14:10:10.048221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.032 [2024-12-06 14:10:10.048229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.032 [2024-12-06 14:10:10.048235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
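For readers following the trace above: before nvmf_tgt is started, the test has built a small two-port topology in which the target side is isolated in a network namespace. The following is a condensed sketch of that bring-up, not the test script itself; the interface names (cvl_0_0, cvl_0_1), addresses and nvmf_tgt arguments are taken from this run, and the socket-wait loop at the end is only an assumed stand-in for the waitforlisten helper referenced in the log.
    # target-side port moves into the namespace, initiator-side port stays in the root ns
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # allow NVMe/TCP traffic to the target port, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # launch the target inside the namespace and wait for its RPC socket to appear
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    for _ in $(seq 1 100); do [ -S /var/tmp/spdk.sock ] && break; sleep 0.1; done
Once the socket is up, the test drives the target through rpc_cmd calls (bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener), as seen in the trace that follows.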
00:16:22.032 [2024-12-06 14:10:10.050662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.032 [2024-12-06 14:10:10.050825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.032 [2024-12-06 14:10:10.050990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.032 [2024-12-06 14:10:10.050990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 [2024-12-06 14:10:10.765870] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 Null1 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 14:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 [2024-12-06 14:10:10.833684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 Null2 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:16:22.294 Null3 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.294 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.556 Null4 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.556 14:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.556 14:10:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.556 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.556 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:16:22.556 00:16:22.556 Discovery Log Number of Records 6, Generation counter 6 00:16:22.556 =====Discovery Log Entry 0====== 00:16:22.556 trtype: tcp 00:16:22.556 adrfam: ipv4 00:16:22.556 subtype: current discovery subsystem 00:16:22.556 treq: not required 00:16:22.556 portid: 0 00:16:22.556 trsvcid: 4420 00:16:22.556 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:22.556 traddr: 10.0.0.2 00:16:22.556 eflags: explicit discovery connections, duplicate discovery information 00:16:22.556 sectype: none 00:16:22.556 =====Discovery Log Entry 1====== 00:16:22.556 trtype: tcp 00:16:22.556 adrfam: ipv4 00:16:22.556 subtype: nvme subsystem 00:16:22.556 treq: not required 00:16:22.556 portid: 0 00:16:22.556 trsvcid: 4420 00:16:22.556 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:22.556 traddr: 10.0.0.2 00:16:22.556 eflags: none 00:16:22.556 sectype: none 00:16:22.556 =====Discovery Log Entry 2====== 00:16:22.556 trtype: tcp 00:16:22.556 adrfam: ipv4 00:16:22.556 subtype: nvme subsystem 00:16:22.556 treq: not required 00:16:22.556 portid: 0 00:16:22.556 trsvcid: 4420 00:16:22.556 subnqn: nqn.2016-06.io.spdk:cnode2 00:16:22.556 traddr: 10.0.0.2 00:16:22.556 eflags: none 00:16:22.556 sectype: none 00:16:22.556 =====Discovery Log Entry 3====== 00:16:22.556 trtype: tcp 00:16:22.556 adrfam: ipv4 00:16:22.556 subtype: nvme subsystem 00:16:22.556 treq: not required 00:16:22.557 portid: 0 00:16:22.557 trsvcid: 4420 00:16:22.557 subnqn: nqn.2016-06.io.spdk:cnode3 00:16:22.557 traddr: 10.0.0.2 00:16:22.557 eflags: none 00:16:22.557 sectype: none 00:16:22.557 =====Discovery Log Entry 4====== 00:16:22.557 trtype: tcp 00:16:22.557 adrfam: ipv4 00:16:22.557 subtype: nvme subsystem 
00:16:22.557 treq: not required 00:16:22.557 portid: 0 00:16:22.557 trsvcid: 4420 00:16:22.557 subnqn: nqn.2016-06.io.spdk:cnode4 00:16:22.557 traddr: 10.0.0.2 00:16:22.557 eflags: none 00:16:22.557 sectype: none 00:16:22.557 =====Discovery Log Entry 5====== 00:16:22.557 trtype: tcp 00:16:22.557 adrfam: ipv4 00:16:22.557 subtype: discovery subsystem referral 00:16:22.557 treq: not required 00:16:22.557 portid: 0 00:16:22.557 trsvcid: 4430 00:16:22.557 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:22.557 traddr: 10.0.0.2 00:16:22.557 eflags: none 00:16:22.557 sectype: none 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:16:22.557 Perform nvmf subsystem discovery via RPC 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.557 [ 00:16:22.557 { 00:16:22.557 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:22.557 "subtype": "Discovery", 00:16:22.557 "listen_addresses": [ 00:16:22.557 { 00:16:22.557 "trtype": "TCP", 00:16:22.557 "adrfam": "IPv4", 00:16:22.557 "traddr": "10.0.0.2", 00:16:22.557 "trsvcid": "4420" 00:16:22.557 } 00:16:22.557 ], 00:16:22.557 "allow_any_host": true, 00:16:22.557 "hosts": [] 00:16:22.557 }, 00:16:22.557 { 00:16:22.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.557 "subtype": "NVMe", 00:16:22.557 "listen_addresses": [ 00:16:22.557 { 00:16:22.557 "trtype": "TCP", 00:16:22.557 "adrfam": "IPv4", 00:16:22.557 "traddr": "10.0.0.2", 00:16:22.557 "trsvcid": "4420" 00:16:22.557 } 00:16:22.557 ], 00:16:22.557 "allow_any_host": true, 00:16:22.557 "hosts": [], 00:16:22.557 "serial_number": "SPDK00000000000001", 00:16:22.557 "model_number": "SPDK bdev Controller", 00:16:22.557 "max_namespaces": 32, 00:16:22.557 "min_cntlid": 1, 00:16:22.557 "max_cntlid": 65519, 00:16:22.557 "namespaces": [ 00:16:22.557 { 00:16:22.557 "nsid": 1, 00:16:22.557 "bdev_name": "Null1", 00:16:22.557 "name": "Null1", 00:16:22.557 "nguid": "3E508E7B490B4622A4547C0E1EFCB001", 00:16:22.557 "uuid": "3e508e7b-490b-4622-a454-7c0e1efcb001" 00:16:22.557 } 00:16:22.557 ] 00:16:22.557 }, 00:16:22.557 { 00:16:22.557 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:22.557 "subtype": "NVMe", 00:16:22.557 "listen_addresses": [ 00:16:22.557 { 00:16:22.557 "trtype": "TCP", 00:16:22.557 "adrfam": "IPv4", 00:16:22.557 "traddr": "10.0.0.2", 00:16:22.557 "trsvcid": "4420" 00:16:22.557 } 00:16:22.557 ], 00:16:22.557 "allow_any_host": true, 00:16:22.557 "hosts": [], 00:16:22.557 "serial_number": "SPDK00000000000002", 00:16:22.557 "model_number": "SPDK bdev Controller", 00:16:22.557 "max_namespaces": 32, 00:16:22.557 "min_cntlid": 1, 00:16:22.557 "max_cntlid": 65519, 00:16:22.557 "namespaces": [ 00:16:22.557 { 00:16:22.557 "nsid": 1, 00:16:22.557 "bdev_name": "Null2", 00:16:22.557 "name": "Null2", 00:16:22.557 "nguid": "EA0304615CD244BA8C9B2FEE03FF831B", 00:16:22.557 "uuid": "ea030461-5cd2-44ba-8c9b-2fee03ff831b" 00:16:22.557 } 00:16:22.557 ] 00:16:22.557 }, 00:16:22.557 { 00:16:22.557 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:22.557 "subtype": "NVMe", 00:16:22.557 "listen_addresses": [ 00:16:22.557 { 00:16:22.557 "trtype": "TCP", 00:16:22.557 "adrfam": "IPv4", 00:16:22.557 "traddr": "10.0.0.2", 
00:16:22.557 "trsvcid": "4420" 00:16:22.557 } 00:16:22.557 ], 00:16:22.557 "allow_any_host": true, 00:16:22.557 "hosts": [], 00:16:22.557 "serial_number": "SPDK00000000000003", 00:16:22.557 "model_number": "SPDK bdev Controller", 00:16:22.557 "max_namespaces": 32, 00:16:22.557 "min_cntlid": 1, 00:16:22.557 "max_cntlid": 65519, 00:16:22.557 "namespaces": [ 00:16:22.557 { 00:16:22.557 "nsid": 1, 00:16:22.557 "bdev_name": "Null3", 00:16:22.557 "name": "Null3", 00:16:22.557 "nguid": "05E0D6686E1F46E0A7DC1A419B822046", 00:16:22.557 "uuid": "05e0d668-6e1f-46e0-a7dc-1a419b822046" 00:16:22.557 } 00:16:22.557 ] 00:16:22.557 }, 00:16:22.557 { 00:16:22.557 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:22.557 "subtype": "NVMe", 00:16:22.557 "listen_addresses": [ 00:16:22.557 { 00:16:22.557 "trtype": "TCP", 00:16:22.557 "adrfam": "IPv4", 00:16:22.557 "traddr": "10.0.0.2", 00:16:22.557 "trsvcid": "4420" 00:16:22.557 } 00:16:22.557 ], 00:16:22.557 "allow_any_host": true, 00:16:22.557 "hosts": [], 00:16:22.557 "serial_number": "SPDK00000000000004", 00:16:22.557 "model_number": "SPDK bdev Controller", 00:16:22.557 "max_namespaces": 32, 00:16:22.557 "min_cntlid": 1, 00:16:22.557 "max_cntlid": 65519, 00:16:22.557 "namespaces": [ 00:16:22.557 { 00:16:22.557 "nsid": 1, 00:16:22.557 "bdev_name": "Null4", 00:16:22.557 "name": "Null4", 00:16:22.557 "nguid": "6048C648B2D4423588D7F40F56F5A2D3", 00:16:22.557 "uuid": "6048c648-b2d4-4235-88d7-f40f56f5a2d3" 00:16:22.557 } 00:16:22.557 ] 00:16:22.557 } 00:16:22.557 ] 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.557 14:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.557 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:22.818 14:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.818 rmmod nvme_tcp 00:16:22.818 rmmod nvme_fabrics 00:16:22.818 rmmod nvme_keyring 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2720643 ']' 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2720643 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2720643 ']' 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2720643 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.818 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720643 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720643' 00:16:23.079 killing process with pid 2720643 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2720643 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2720643 00:16:23.079 14:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.079 14:10:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:25.624 00:16:25.624 real 0m11.640s 00:16:25.624 user 0m8.555s 00:16:25.624 sys 0m6.186s 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:25.624 ************************************ 00:16:25.624 END TEST nvmf_target_discovery 00:16:25.624 ************************************ 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.624 ************************************ 00:16:25.624 START TEST nvmf_referrals 00:16:25.624 ************************************ 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:25.624 * Looking for test storage... 
00:16:25.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:25.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.624 --rc genhtml_branch_coverage=1 00:16:25.624 --rc genhtml_function_coverage=1 00:16:25.624 --rc genhtml_legend=1 00:16:25.624 --rc geninfo_all_blocks=1 00:16:25.624 --rc geninfo_unexecuted_blocks=1 00:16:25.624 00:16:25.624 ' 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:25.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.624 --rc genhtml_branch_coverage=1 00:16:25.624 --rc genhtml_function_coverage=1 00:16:25.624 --rc genhtml_legend=1 00:16:25.624 --rc geninfo_all_blocks=1 00:16:25.624 --rc geninfo_unexecuted_blocks=1 00:16:25.624 00:16:25.624 ' 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:25.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.624 --rc genhtml_branch_coverage=1 00:16:25.624 --rc genhtml_function_coverage=1 00:16:25.624 --rc genhtml_legend=1 00:16:25.624 --rc geninfo_all_blocks=1 00:16:25.624 --rc geninfo_unexecuted_blocks=1 00:16:25.624 00:16:25.624 ' 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:25.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.624 --rc genhtml_branch_coverage=1 00:16:25.624 --rc genhtml_function_coverage=1 00:16:25.624 --rc genhtml_legend=1 00:16:25.624 --rc geninfo_all_blocks=1 00:16:25.624 --rc geninfo_unexecuted_blocks=1 00:16:25.624 00:16:25.624 ' 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.624 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:16:25.625 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.625 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.625 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.625 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.625 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.625 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.625 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.625 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.625 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.625 14:10:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:16:25.625 14:10:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:16:33.773 14:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:33.773 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.773 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:33.773 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:33.774 
14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:33.774 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:33.774 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:33.774 14:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:33.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:33.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:16:33.774 00:16:33.774 --- 10.0.0.2 ping statistics --- 00:16:33.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.774 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:33.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:16:33.774 00:16:33.774 --- 10.0.0.1 ping statistics --- 00:16:33.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.774 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2725157 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2725157 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2725157 ']' 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
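For readers following the nvmftestinit/nvmfappstart trace above: the setup amounts to moving one port of the e810 pair into a private network namespace, addressing both sides on 10.0.0.0/24, opening TCP/4420 in iptables, confirming reachability with ping in both directions, and then launching the SPDK target inside that namespace. A minimal standalone sketch is shown below; the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk, and the nvmf_tgt arguments are the values observed in this run and will differ on other hosts.
# target-side port goes into its own namespace (names taken from this run)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic in, then sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace, as nvmfappstart does in the trace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &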
00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.774 14:10:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:33.774 [2024-12-06 14:10:21.675880] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:16:33.774 [2024-12-06 14:10:21.675945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.774 [2024-12-06 14:10:21.776183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.774 [2024-12-06 14:10:21.830793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.774 [2024-12-06 14:10:21.830848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.774 [2024-12-06 14:10:21.830863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.774 [2024-12-06 14:10:21.830870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.774 [2024-12-06 14:10:21.830876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.774 [2024-12-06 14:10:21.832949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.774 [2024-12-06 14:10:21.833103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.774 [2024-12-06 14:10:21.833265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.774 [2024-12-06 14:10:21.833266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.036 [2024-12-06 14:10:22.555496] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:16:34.036 [2024-12-06 14:10:22.583745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.036 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.316 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.577 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:34.577 14:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:34.577 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:34.578 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:34.578 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:34.578 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:34.578 14:10:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:34.578 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:34.838 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.838 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:34.838 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:34.838 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:34.838 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:16:34.838 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:34.838 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:34.838 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:34.838 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:34.838 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:34.839 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:34.839 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:34.839 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:34.839 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:34.839 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:34.839 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:35.100 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:35.100 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:35.100 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:35.100 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:35.100 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:35.100 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.361 14:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:35.361 14:10:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:35.622 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:35.884 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
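To summarize the referral assertions traced above before the teardown completes: the test creates the TCP transport, exposes the discovery service on 10.0.0.2:8009, adds and removes referrals over RPC, and checks that the RPC view (nvmf_discovery_get_referrals) and the host view (nvme discover) agree. The rpc_cmd helper in the trace wraps SPDK's scripts/rpc.py; a rough standalone equivalent is sketched below, assuming rpc.py is run from the SPDK tree and talks to the default /var/tmp/spdk.sock, with the host NQN/ID and jq filters copied from this run.
# discovery service plus three plain referrals, as in referrals.sh
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
# target-side view of the referrals
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
# host-side view via the discovery log page
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
# referrals can also carry an explicit subsystem NQN, and are removed the same way
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1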
00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:36.145 rmmod nvme_tcp 00:16:36.145 rmmod nvme_fabrics 00:16:36.145 rmmod nvme_keyring 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2725157 ']' 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2725157 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2725157 ']' 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2725157 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2725157 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2725157' 00:16:36.145 killing process with pid 2725157 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2725157 00:16:36.145 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2725157 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.423 14:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.423 14:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.337 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:38.337 00:16:38.337 real 0m13.169s 00:16:38.337 user 0m15.203s 00:16:38.337 sys 0m6.434s 00:16:38.337 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.337 14:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:38.337 ************************************ 00:16:38.337 END TEST nvmf_referrals 00:16:38.337 ************************************ 00:16:38.598 14:10:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:38.598 14:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:38.598 14:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.598 14:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.598 ************************************ 00:16:38.598 START TEST nvmf_connect_disconnect 00:16:38.598 ************************************ 00:16:38.598 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:38.598 * Looking for test storage... 00:16:38.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.598 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:38.598 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:38.598 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:16:38.598 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:38.598 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.598 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.599 14:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.599 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:38.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.861 --rc genhtml_branch_coverage=1 00:16:38.861 --rc genhtml_function_coverage=1 00:16:38.861 --rc genhtml_legend=1 00:16:38.861 --rc geninfo_all_blocks=1 00:16:38.861 --rc geninfo_unexecuted_blocks=1 00:16:38.861 00:16:38.861 ' 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:38.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.861 --rc genhtml_branch_coverage=1 00:16:38.861 --rc genhtml_function_coverage=1 00:16:38.861 --rc genhtml_legend=1 00:16:38.861 --rc geninfo_all_blocks=1 00:16:38.861 --rc geninfo_unexecuted_blocks=1 00:16:38.861 00:16:38.861 ' 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:38.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.861 --rc genhtml_branch_coverage=1 00:16:38.861 --rc genhtml_function_coverage=1 00:16:38.861 --rc genhtml_legend=1 00:16:38.861 --rc geninfo_all_blocks=1 00:16:38.861 --rc geninfo_unexecuted_blocks=1 00:16:38.861 00:16:38.861 ' 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:38.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.861 --rc genhtml_branch_coverage=1 00:16:38.861 --rc genhtml_function_coverage=1 00:16:38.861 --rc genhtml_legend=1 00:16:38.861 --rc geninfo_all_blocks=1 00:16:38.861 --rc geninfo_unexecuted_blocks=1 00:16:38.861 00:16:38.861 ' 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.861 14:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.861 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:16:38.862 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:47.024 
14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:47.024 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:47.025 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:47.025 
14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:47.025 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:47.025 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
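A note on the "[: : integer expression expected" message logged above from nvmf/common.sh line 33: the xtrace shows the test expanding to '[' '' -eq 1 ']', i.e. -eq being handed an empty string because the value it reads is unset. A minimal sketch of the failure and one defensive variant, using a hypothetical FLAG variable rather than the script's real one:

#!/usr/bin/env bash
# Reproduce the error: -eq needs integers on both sides.
FLAG=""
[ "$FLAG" -eq 1 ] && echo "flag enabled"   # prints "[: : integer expression expected", exits with status 2

# Defensive variant: default an empty/unset value to 0 before the numeric test.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi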
00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:47.025 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:47.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:16:47.025 00:16:47.025 --- 10.0.0.2 ping statistics --- 00:16:47.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.025 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:16:47.025 00:16:47.025 --- 10.0.0.1 ping statistics --- 00:16:47.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.025 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:16:47.025 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:47.026 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.026 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:47.026 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2730102 00:16:47.026 14:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2730102 00:16:47.026 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:47.026 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2730102 ']' 00:16:47.026 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.026 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.026 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.026 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.026 14:10:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:47.026 [2024-12-06 14:10:34.876437] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:16:47.026 [2024-12-06 14:10:34.876514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.026 [2024-12-06 14:10:34.978379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.026 [2024-12-06 14:10:35.031766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.026 [2024-12-06 14:10:35.031815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.026 [2024-12-06 14:10:35.031823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.026 [2024-12-06 14:10:35.031830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.026 [2024-12-06 14:10:35.031837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
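For readability, the namespace plumbing that nvmf_tcp_init performed in the lines above (create the cvl_0_0_ns_spdk namespace, move the target port into it, address both ends, open TCP/4420 in iptables, ping-check both directions) boils down to the sketch below. Interface names and addresses are copied from the log; this is a condensed illustration, not the exact nvmf/common.sh code path:

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk    # target-side namespace, as in the log
TGT_IF=cvl_0_0        # port handed to the target namespace
INI_IF=cvl_0_1        # port left on the host for the initiator

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-side interface; the comment tag lets the
# teardown pass strip the rule again (see the iptables-save | grep -v SPDK_NVMF step later).
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"

# Sanity-check reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1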
00:16:47.026 [2024-12-06 14:10:35.033897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.026 [2024-12-06 14:10:35.034057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.026 [2024-12-06 14:10:35.034220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.026 [2024-12-06 14:10:35.034221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:47.286 [2024-12-06 14:10:35.756438] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:47.286 14:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:47.286 [2024-12-06 14:10:35.833326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.286 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.287 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:16:47.287 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:16:47.287 14:10:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:16:51.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.700 rmmod nvme_tcp 00:17:05.700 rmmod nvme_fabrics 00:17:05.700 rmmod nvme_keyring 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2730102 ']' 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2730102 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2730102 ']' 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2730102 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
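The connect_disconnect body that just ran (the rpc_cmd calls followed by five "disconnected 1 controller(s)" lines) condenses to roughly the sequence below. rpc.py stands in for the log's rpc_cmd wrapper, and the per-iteration connect is hidden behind 'set +x', so the loop body is an approximation assembled from the NVME_CONNECT/NVME_HOST variables defined earlier in the log:

#!/usr/bin/env bash
set -e
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Provisioning, as issued via rpc_cmd above.
"$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0
"$RPC" bdev_malloc_create 64 512                       # returns bdev name Malloc0
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# num_iterations=5 in the log; each pass produces one "disconnected 1 controller(s)" line.
for _ in $(seq 1 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" --hostnqn="$HOSTNQN"
    sleep 1   # the real test waits for the controller/namespace to appear; that helper is not shown in this excerpt
    nvme disconnect -n "$NQN"
done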
00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.700 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2730102 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2730102' 00:17:05.700 killing process with pid 2730102 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2730102 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2730102 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.700 14:10:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.610 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:07.610 00:17:07.610 real 0m29.188s 00:17:07.610 user 1m18.233s 00:17:07.610 sys 0m7.024s 00:17:07.610 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.610 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:07.610 ************************************ 00:17:07.610 END TEST nvmf_connect_disconnect 00:17:07.610 ************************************ 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.872 14:10:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.872 ************************************ 00:17:07.872 START TEST nvmf_multitarget 00:17:07.872 ************************************ 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:07.872 * Looking for test storage... 00:17:07.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:07.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.872 --rc genhtml_branch_coverage=1 00:17:07.872 --rc genhtml_function_coverage=1 00:17:07.872 --rc genhtml_legend=1 00:17:07.872 --rc geninfo_all_blocks=1 00:17:07.872 --rc geninfo_unexecuted_blocks=1 00:17:07.872 00:17:07.872 ' 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:07.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.872 --rc genhtml_branch_coverage=1 00:17:07.872 --rc genhtml_function_coverage=1 00:17:07.872 --rc genhtml_legend=1 00:17:07.872 --rc geninfo_all_blocks=1 00:17:07.872 --rc geninfo_unexecuted_blocks=1 00:17:07.872 00:17:07.872 ' 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:07.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.872 --rc genhtml_branch_coverage=1 00:17:07.872 --rc genhtml_function_coverage=1 00:17:07.872 --rc genhtml_legend=1 00:17:07.872 --rc geninfo_all_blocks=1 00:17:07.872 --rc geninfo_unexecuted_blocks=1 00:17:07.872 00:17:07.872 ' 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:07.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.872 --rc genhtml_branch_coverage=1 00:17:07.872 --rc genhtml_function_coverage=1 00:17:07.872 --rc genhtml_legend=1 00:17:07.872 --rc geninfo_all_blocks=1 00:17:07.872 --rc geninfo_unexecuted_blocks=1 00:17:07.872 00:17:07.872 ' 00:17:07.872 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.872 14:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:08.134 14:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:08.134 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.135 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.135 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.135 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:08.135 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:08.135 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:08.135 14:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
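The device scan running here (and the identical one earlier in the connect_disconnect test) buckets NICs into e810/x722/mlx arrays by PCI vendor:device ID and then resolves each selected device to its net interface under /sys. A rough standalone equivalent, purely illustrative since the real gather_supported_nvmf_pci_devs works from a pre-built pci_bus_cache:

#!/usr/bin/env bash
declare -a e810 x722 mlx
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")    # e.g. 0x8086
    device=$(cat "$dev/device")    # e.g. 0x159b
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) e810+=("${dev##*/}") ;;
        0x8086:0x37d2)               x722+=("${dev##*/}") ;;
        0x15b3:*)                    mlx+=("${dev##*/}")  ;;   # the log matches a specific Mellanox ID list, not a wildcard
    esac
done

# Since the scan selects the e810 list here ([[ e810 == e810 ]] above), look up the
# kernel net devices for those ports, matching the "Found net devices under ..." lines.
for pci in "${e810[@]}"; do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
    done
done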
00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:16.282 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:16.283 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:16.283 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:16.283 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:16.283 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:16.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:17:16.283 00:17:16.283 --- 10.0.0.2 ping statistics --- 00:17:16.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.283 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:17:16.283 00:17:16.283 --- 10.0.0.1 ping statistics --- 00:17:16.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.283 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2738059 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2738059 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2738059 ']' 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.283 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:16.283 [2024-12-06 14:11:04.040251] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
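[Editor's note] The nvmf_tcp_init sequence traced above (flush addresses, create a network namespace, move the target-side port into it, assign 10.0.0.x addresses, open TCP port 4420 and verify reachability with ping) can be reproduced by hand roughly as below. This is a minimal sketch, assuming root and two E810 ports already renamed cvl_0_0 and cvl_0_1; the names and addresses mirror the trace, everything else is illustrative and not the project's exact helper.

    #!/usr/bin/env bash
    # Sketch of the namespace-based NVMe/TCP test rig seen in the trace.
    # Assumes: run as root, NICs cvl_0_0 (target side) and cvl_0_1 (initiator side).
    set -euo pipefail

    NS=cvl_0_0_ns_spdk
    TARGET_IF=cvl_0_0   INITIATOR_IF=cvl_0_1
    TARGET_IP=10.0.0.2  INITIATOR_IP=10.0.0.1

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"               # isolate the target port in its own namespace
    ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port; the comment tag lets the rule be filtered out later.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: test rule'

    # Sanity check: each side must reach the other before any NVMe-oF traffic.
    ping -c 1 "$TARGET_IP"
    ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"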
00:17:16.283 [2024-12-06 14:11:04.040318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.283 [2024-12-06 14:11:04.144110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.283 [2024-12-06 14:11:04.197438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.283 [2024-12-06 14:11:04.197501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.283 [2024-12-06 14:11:04.197510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.284 [2024-12-06 14:11:04.197518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.284 [2024-12-06 14:11:04.197524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.284 [2024-12-06 14:11:04.199523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.284 [2024-12-06 14:11:04.199750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.284 [2024-12-06 14:11:04.199945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.284 [2024-12-06 14:11:04.199947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.284 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.284 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:16.284 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:16.284 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:16.284 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:16.284 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.284 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:16.284 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:16.284 14:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:16.546 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:16.546 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:16.546 "nvmf_tgt_1" 00:17:16.546 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:16.808 "nvmf_tgt_2" 00:17:16.808 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:17:16.808 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:16.808 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:16.808 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:17.068 true 00:17:17.068 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:17.068 true 00:17:17.068 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:17.068 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:17.329 rmmod nvme_tcp 00:17:17.329 rmmod nvme_fabrics 00:17:17.329 rmmod nvme_keyring 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2738059 ']' 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2738059 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2738059 ']' 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2738059 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2738059 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.329 14:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2738059' 00:17:17.329 killing process with pid 2738059 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2738059 00:17:17.329 14:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2738059 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.590 14:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.504 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:19.504 00:17:19.504 real 0m11.808s 00:17:19.504 user 0m10.258s 00:17:19.504 sys 0m6.188s 00:17:19.504 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.504 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:19.504 ************************************ 00:17:19.504 END TEST nvmf_multitarget 00:17:19.504 ************************************ 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.764 ************************************ 00:17:19.764 START TEST nvmf_rpc 00:17:19.764 ************************************ 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:19.764 * Looking for test storage... 
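[Editor's note] The nvmf_multitarget pass that just finished above boils down to a handful of JSON-RPC calls: count the targets, create two more, confirm the count, delete them, and confirm only the default remains. A minimal sketch, assuming a running nvmf_tgt on the default RPC socket and the multitarget_rpc.py helper; the path below is illustrative.

    # Sketch of the multitarget check driven by target/multitarget.sh.
    rpc=/path/to/spdk/test/nvmf/target/multitarget_rpc.py   # illustrative path

    [[ $("$rpc" nvmf_get_targets | jq length) -eq 1 ]]       # only the default target exists

    "$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
    [[ $("$rpc" nvmf_get_targets | jq length) -eq 3 ]]       # default + the two new targets

    "$rpc" nvmf_delete_target -n nvmf_tgt_1
    "$rpc" nvmf_delete_target -n nvmf_tgt_2
    [[ $("$rpc" nvmf_get_targets | jq length) -eq 1 ]]       # back to the default only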
00:17:19.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:19.764 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.026 --rc genhtml_branch_coverage=1 00:17:20.026 --rc genhtml_function_coverage=1 00:17:20.026 --rc genhtml_legend=1 00:17:20.026 --rc geninfo_all_blocks=1 00:17:20.026 --rc geninfo_unexecuted_blocks=1 00:17:20.026 00:17:20.026 ' 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.026 --rc genhtml_branch_coverage=1 00:17:20.026 --rc genhtml_function_coverage=1 00:17:20.026 --rc genhtml_legend=1 00:17:20.026 --rc geninfo_all_blocks=1 00:17:20.026 --rc geninfo_unexecuted_blocks=1 00:17:20.026 00:17:20.026 ' 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.026 --rc genhtml_branch_coverage=1 00:17:20.026 --rc genhtml_function_coverage=1 00:17:20.026 --rc genhtml_legend=1 00:17:20.026 --rc geninfo_all_blocks=1 00:17:20.026 --rc geninfo_unexecuted_blocks=1 00:17:20.026 00:17:20.026 ' 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:20.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.026 --rc genhtml_branch_coverage=1 00:17:20.026 --rc genhtml_function_coverage=1 00:17:20.026 --rc genhtml_legend=1 00:17:20.026 --rc geninfo_all_blocks=1 00:17:20.026 --rc geninfo_unexecuted_blocks=1 00:17:20.026 00:17:20.026 ' 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
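[Editor's note] The cmp_versions trace above compares the installed lcov version (taken from 'lcov --version' via awk) against 2 to decide which coverage options to use. The dotted-version comparison it performs follows a common shell idiom; the sketch below is a standalone illustration, not the project's scripts/common.sh implementation, and the fallback LCOV_OPTS values are the ones shown in the trace.

    # Return 0 (true) if dotted version $1 is strictly older than $2.
    version_lt() {
        local IFS=.- a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    # Older lcov releases need the --rc spelling of the coverage switches.
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi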
00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.026 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:20.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:20.027 14:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:20.027 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:28.164 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:28.165 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:28.165 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:28.165 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:28.165 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:28.165 14:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:28.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:17:28.165 00:17:28.165 --- 10.0.0.2 ping statistics --- 00:17:28.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.165 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:28.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:17:28.165 00:17:28.165 --- 10.0.0.1 ping statistics --- 00:17:28.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.165 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.165 14:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.165 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2742759 00:17:28.165 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2742759 00:17:28.165 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:28.165 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2742759 ']' 00:17:28.165 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.165 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.165 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.165 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.165 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.165 [2024-12-06 14:11:16.063021] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
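[Editor's note] As in the previous test, nvmfappstart launches the target application inside the namespace and blocks until its JSON-RPC socket answers before creating the TCP transport. A minimal sketch of that pattern; the binary and rpc.py paths are illustrative and the wait loop is simplified relative to the real waitforlisten helper.

    NS=cvl_0_0_ns_spdk
    NVMF_TGT=/path/to/spdk/build/bin/nvmf_tgt        # illustrative path
    RPC=/path/to/spdk/scripts/rpc.py                 # illustrative path
    RPC_SOCK=/var/tmp/spdk.sock

    # Run the target on 4 cores with all tracepoint groups enabled, as in the log.
    ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll until the process is alive and its RPC socket accepts commands.
    for _ in $(seq 1 100); do
        if kill -0 "$nvmfpid" 2>/dev/null &&
           "$RPC" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done

    # The transport must exist before any subsystem listener can be added.
    "$RPC" -s "$RPC_SOCK" nvmf_create_transport -t tcp -o -u 8192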
00:17:28.165 [2024-12-06 14:11:16.063085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.165 [2024-12-06 14:11:16.138406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.165 [2024-12-06 14:11:16.186260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.165 [2024-12-06 14:11:16.186315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.165 [2024-12-06 14:11:16.186322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.165 [2024-12-06 14:11:16.186327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.165 [2024-12-06 14:11:16.186332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.165 [2024-12-06 14:11:16.188173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.165 [2024-12-06 14:11:16.188334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.166 [2024-12-06 14:11:16.188511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.166 [2024-12-06 14:11:16.188552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:28.166 "tick_rate": 2400000000, 00:17:28.166 "poll_groups": [ 00:17:28.166 { 00:17:28.166 "name": "nvmf_tgt_poll_group_000", 00:17:28.166 "admin_qpairs": 0, 00:17:28.166 "io_qpairs": 0, 00:17:28.166 "current_admin_qpairs": 0, 00:17:28.166 "current_io_qpairs": 0, 00:17:28.166 "pending_bdev_io": 0, 00:17:28.166 "completed_nvme_io": 0, 00:17:28.166 "transports": [] 00:17:28.166 }, 00:17:28.166 { 00:17:28.166 "name": "nvmf_tgt_poll_group_001", 00:17:28.166 "admin_qpairs": 0, 00:17:28.166 "io_qpairs": 0, 00:17:28.166 "current_admin_qpairs": 0, 00:17:28.166 "current_io_qpairs": 0, 00:17:28.166 "pending_bdev_io": 0, 00:17:28.166 "completed_nvme_io": 0, 00:17:28.166 "transports": [] 00:17:28.166 }, 00:17:28.166 { 00:17:28.166 "name": "nvmf_tgt_poll_group_002", 00:17:28.166 "admin_qpairs": 0, 00:17:28.166 "io_qpairs": 0, 00:17:28.166 
"current_admin_qpairs": 0, 00:17:28.166 "current_io_qpairs": 0, 00:17:28.166 "pending_bdev_io": 0, 00:17:28.166 "completed_nvme_io": 0, 00:17:28.166 "transports": [] 00:17:28.166 }, 00:17:28.166 { 00:17:28.166 "name": "nvmf_tgt_poll_group_003", 00:17:28.166 "admin_qpairs": 0, 00:17:28.166 "io_qpairs": 0, 00:17:28.166 "current_admin_qpairs": 0, 00:17:28.166 "current_io_qpairs": 0, 00:17:28.166 "pending_bdev_io": 0, 00:17:28.166 "completed_nvme_io": 0, 00:17:28.166 "transports": [] 00:17:28.166 } 00:17:28.166 ] 00:17:28.166 }' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 [2024-12-06 14:11:16.471965] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:28.166 "tick_rate": 2400000000, 00:17:28.166 "poll_groups": [ 00:17:28.166 { 00:17:28.166 "name": "nvmf_tgt_poll_group_000", 00:17:28.166 "admin_qpairs": 0, 00:17:28.166 "io_qpairs": 0, 00:17:28.166 "current_admin_qpairs": 0, 00:17:28.166 "current_io_qpairs": 0, 00:17:28.166 "pending_bdev_io": 0, 00:17:28.166 "completed_nvme_io": 0, 00:17:28.166 "transports": [ 00:17:28.166 { 00:17:28.166 "trtype": "TCP" 00:17:28.166 } 00:17:28.166 ] 00:17:28.166 }, 00:17:28.166 { 00:17:28.166 "name": "nvmf_tgt_poll_group_001", 00:17:28.166 "admin_qpairs": 0, 00:17:28.166 "io_qpairs": 0, 00:17:28.166 "current_admin_qpairs": 0, 00:17:28.166 "current_io_qpairs": 0, 00:17:28.166 "pending_bdev_io": 0, 00:17:28.166 "completed_nvme_io": 0, 00:17:28.166 "transports": [ 00:17:28.166 { 00:17:28.166 "trtype": "TCP" 00:17:28.166 } 00:17:28.166 ] 00:17:28.166 }, 00:17:28.166 { 00:17:28.166 "name": "nvmf_tgt_poll_group_002", 00:17:28.166 "admin_qpairs": 0, 00:17:28.166 "io_qpairs": 0, 00:17:28.166 "current_admin_qpairs": 0, 00:17:28.166 "current_io_qpairs": 0, 00:17:28.166 "pending_bdev_io": 0, 00:17:28.166 "completed_nvme_io": 0, 00:17:28.166 "transports": [ 00:17:28.166 { 00:17:28.166 "trtype": "TCP" 
00:17:28.166 } 00:17:28.166 ] 00:17:28.166 }, 00:17:28.166 { 00:17:28.166 "name": "nvmf_tgt_poll_group_003", 00:17:28.166 "admin_qpairs": 0, 00:17:28.166 "io_qpairs": 0, 00:17:28.166 "current_admin_qpairs": 0, 00:17:28.166 "current_io_qpairs": 0, 00:17:28.166 "pending_bdev_io": 0, 00:17:28.166 "completed_nvme_io": 0, 00:17:28.166 "transports": [ 00:17:28.166 { 00:17:28.166 "trtype": "TCP" 00:17:28.166 } 00:17:28.166 ] 00:17:28.166 } 00:17:28.166 ] 00:17:28.166 }' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 Malloc1 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 [2024-12-06 14:11:16.682015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:28.166 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:28.167 [2024-12-06 14:11:16.719017] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:17:28.167 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:28.167 could not add new controller: failed to write to nvme-fabrics device 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:28.167 14:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.167 14:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:30.082 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:30.082 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:30.082 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:30.082 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:30.082 14:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:31.995 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:31.996 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:31.996 [2024-12-06 14:11:20.424099] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:17:31.996 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:31.996 could not add new controller: failed to write to nvme-fabrics device 00:17:31.996 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:31.996 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.996 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.996 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.996 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:31.996 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.996 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.996 
14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.996 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:33.379 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:33.379 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:33.379 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:33.379 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:33.379 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:35.922 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:35.922 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:35.922 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:35.922 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:35.922 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:35.922 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:35.922 14:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:35.922 
14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.922 [2024-12-06 14:11:24.110270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.922 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:37.302 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:37.302 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:37.302 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.302 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:37.302 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:39.216 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:39.216 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:39.216 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:39.216 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:39.216 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:39.216 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:39.216 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:39.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.216 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:39.216 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 [2024-12-06 14:11:27.739627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.217 14:11:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:40.600 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:40.600 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:40.600 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.600 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:40.600 14:11:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.149 [2024-12-06 14:11:31.412216] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.149 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:44.533 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:44.533 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:44.533 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:44.533 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:44.533 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:46.446 
14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:46.446 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:46.446 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:46.446 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:46.446 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.446 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:46.446 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:46.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.446 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:46.446 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:46.446 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:46.447 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:46.447 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:46.447 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:46.705 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.706 [2024-12-06 14:11:35.126266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.706 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:48.089 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:48.089 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:48.089 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.089 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:48.089 14:11:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:50.001 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:50.001 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:50.001 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.001 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:50.001 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.001 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:50.001 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:50.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.261 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:50.261 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:50.261 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:50.261 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:50.261 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 [2024-12-06 14:11:38.802446] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:52.171 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.171 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:52.171 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.171 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:52.171 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:54.080 
14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.080 [2024-12-06 14:11:42.554990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.080 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 [2024-12-06 14:11:42.623130] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 
14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 [2024-12-06 14:11:42.691331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.081 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.341 [2024-12-06 14:11:42.759533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.341 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.342 [2024-12-06 14:11:42.831745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:54.342 "tick_rate": 2400000000, 00:17:54.342 "poll_groups": [ 00:17:54.342 { 00:17:54.342 "name": "nvmf_tgt_poll_group_000", 00:17:54.342 "admin_qpairs": 0, 00:17:54.342 "io_qpairs": 224, 00:17:54.342 "current_admin_qpairs": 0, 00:17:54.342 "current_io_qpairs": 0, 00:17:54.342 "pending_bdev_io": 0, 00:17:54.342 "completed_nvme_io": 358, 00:17:54.342 "transports": [ 00:17:54.342 { 00:17:54.342 "trtype": "TCP" 00:17:54.342 } 00:17:54.342 ] 00:17:54.342 }, 00:17:54.342 { 00:17:54.342 "name": "nvmf_tgt_poll_group_001", 00:17:54.342 "admin_qpairs": 1, 00:17:54.342 "io_qpairs": 223, 00:17:54.342 "current_admin_qpairs": 0, 00:17:54.342 "current_io_qpairs": 0, 00:17:54.342 "pending_bdev_io": 0, 00:17:54.342 "completed_nvme_io": 292, 00:17:54.342 "transports": [ 00:17:54.342 { 00:17:54.342 "trtype": "TCP" 00:17:54.342 } 00:17:54.342 ] 00:17:54.342 }, 00:17:54.342 { 00:17:54.342 "name": "nvmf_tgt_poll_group_002", 00:17:54.342 "admin_qpairs": 6, 00:17:54.342 "io_qpairs": 218, 00:17:54.342 "current_admin_qpairs": 0, 00:17:54.342 "current_io_qpairs": 0, 00:17:54.342 "pending_bdev_io": 0, 00:17:54.342 "completed_nvme_io": 365, 00:17:54.342 "transports": [ 00:17:54.342 { 00:17:54.342 "trtype": "TCP" 00:17:54.342 } 00:17:54.342 ] 00:17:54.342 }, 00:17:54.342 { 00:17:54.342 "name": "nvmf_tgt_poll_group_003", 00:17:54.342 "admin_qpairs": 0, 00:17:54.342 "io_qpairs": 224, 00:17:54.342 "current_admin_qpairs": 0, 00:17:54.342 "current_io_qpairs": 0, 00:17:54.342 "pending_bdev_io": 0, 00:17:54.342 "completed_nvme_io": 224, 00:17:54.342 "transports": [ 00:17:54.342 { 00:17:54.342 "trtype": "TCP" 00:17:54.342 } 00:17:54.342 ] 00:17:54.342 } 00:17:54.342 ] 00:17:54.342 }' 00:17:54.342 14:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:54.342 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:54.602 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:54.602 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:54.602 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:54.602 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:54.602 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:54.602 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:54.603 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.603 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:54.603 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.603 14:11:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.603 rmmod nvme_tcp 00:17:54.603 rmmod nvme_fabrics 00:17:54.603 rmmod nvme_keyring 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2742759 ']' 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2742759 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2742759 ']' 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2742759 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2742759 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2742759' 00:17:54.603 killing process with pid 2742759 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2742759 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2742759 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:54.603 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:54.863 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:54.863 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:54.863 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.863 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.863 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.776 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:56.776 00:17:56.776 real 0m37.114s 00:17:56.776 user 1m49.955s 00:17:56.776 sys 0m7.656s 00:17:56.776 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.776 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.776 ************************************ 00:17:56.776 END TEST nvmf_rpc 00:17:56.776 ************************************ 00:17:56.776 14:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:56.776 14:11:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:56.776 14:11:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.776 14:11:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:56.776 ************************************ 00:17:56.776 START TEST nvmf_invalid 00:17:56.776 ************************************ 00:17:56.776 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:57.037 * Looking for test storage... 
00:17:57.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.037 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:57.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.038 --rc genhtml_branch_coverage=1 00:17:57.038 --rc genhtml_function_coverage=1 00:17:57.038 --rc genhtml_legend=1 00:17:57.038 --rc geninfo_all_blocks=1 00:17:57.038 --rc geninfo_unexecuted_blocks=1 00:17:57.038 00:17:57.038 ' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:57.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.038 --rc genhtml_branch_coverage=1 00:17:57.038 --rc genhtml_function_coverage=1 00:17:57.038 --rc genhtml_legend=1 00:17:57.038 --rc geninfo_all_blocks=1 00:17:57.038 --rc geninfo_unexecuted_blocks=1 00:17:57.038 00:17:57.038 ' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:57.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.038 --rc genhtml_branch_coverage=1 00:17:57.038 --rc genhtml_function_coverage=1 00:17:57.038 --rc genhtml_legend=1 00:17:57.038 --rc geninfo_all_blocks=1 00:17:57.038 --rc geninfo_unexecuted_blocks=1 00:17:57.038 00:17:57.038 ' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:57.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.038 --rc genhtml_branch_coverage=1 00:17:57.038 --rc genhtml_function_coverage=1 00:17:57.038 --rc genhtml_legend=1 00:17:57.038 --rc geninfo_all_blocks=1 00:17:57.038 --rc geninfo_unexecuted_blocks=1 00:17:57.038 00:17:57.038 ' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:57.038 14:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:57.038 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:05.285 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:05.285 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:05.285 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:05.285 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:05.285 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:05.286 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:05.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:18:05.286 00:18:05.286 --- 10.0.0.2 ping statistics --- 00:18:05.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.286 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:18:05.286 00:18:05.286 --- 10.0.0.1 ping statistics --- 00:18:05.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.286 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2752287 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2752287 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2752287 ']' 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.286 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:05.286 [2024-12-06 14:11:53.202630] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
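[Editor's aside] The block above is nvmftestinit for NET_TYPE=phy: common.sh enumerates the two Intel E810 ports (0000:4b:00.0 and 0000:4b:00.1, exposed as cvl_0_0 and cvl_0_1), moves the target-side port into the cvl_0_0_ns_spdk namespace, assigns 10.0.0.2 (target) and 10.0.0.1 (initiator), opens TCP port 4420 in iptables, ping-checks both directions, and then starts nvmf_tgt inside the namespace. A condensed sketch of that setup using the interface and address names from this log (the exact flags and ordering in common.sh differ slightly):

TARGET_IF=cvl_0_0        # port handed to the SPDK target
INITIATOR_IF=cvl_0_1     # port left in the host namespace
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Tagged so the teardown's "iptables-save | grep -v SPDK_NVMF | iptables-restore" can strip it again.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                       # host -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> host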
00:18:05.286 [2024-12-06 14:11:53.202697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.286 [2024-12-06 14:11:53.305615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.286 [2024-12-06 14:11:53.357860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.286 [2024-12-06 14:11:53.357915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.286 [2024-12-06 14:11:53.357924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.286 [2024-12-06 14:11:53.357931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.286 [2024-12-06 14:11:53.357937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.286 [2024-12-06 14:11:53.360016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.286 [2024-12-06 14:11:53.360176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.286 [2024-12-06 14:11:53.360338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.286 [2024-12-06 14:11:53.360339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.555 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.556 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:05.556 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:05.556 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:05.556 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:05.556 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.556 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:05.556 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30788 00:18:05.816 [2024-12-06 14:11:54.238602] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:05.816 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:05.816 { 00:18:05.816 "nqn": "nqn.2016-06.io.spdk:cnode30788", 00:18:05.816 "tgt_name": "foobar", 00:18:05.816 "method": "nvmf_create_subsystem", 00:18:05.816 "req_id": 1 00:18:05.816 } 00:18:05.816 Got JSON-RPC error response 00:18:05.816 response: 00:18:05.816 { 00:18:05.816 "code": -32603, 00:18:05.816 "message": "Unable to find target foobar" 00:18:05.816 }' 00:18:05.816 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:05.816 { 00:18:05.816 "nqn": "nqn.2016-06.io.spdk:cnode30788", 00:18:05.816 "tgt_name": "foobar", 00:18:05.816 "method": "nvmf_create_subsystem", 00:18:05.816 "req_id": 1 00:18:05.816 } 00:18:05.816 Got JSON-RPC error response 00:18:05.816 
response: 00:18:05.816 { 00:18:05.816 "code": -32603, 00:18:05.816 "message": "Unable to find target foobar" 00:18:05.816 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:05.816 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:05.816 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25817 00:18:05.816 [2024-12-06 14:11:54.451469] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25817: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:06.077 { 00:18:06.077 "nqn": "nqn.2016-06.io.spdk:cnode25817", 00:18:06.077 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:06.077 "method": "nvmf_create_subsystem", 00:18:06.077 "req_id": 1 00:18:06.077 } 00:18:06.077 Got JSON-RPC error response 00:18:06.077 response: 00:18:06.077 { 00:18:06.077 "code": -32602, 00:18:06.077 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:06.077 }' 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:06.077 { 00:18:06.077 "nqn": "nqn.2016-06.io.spdk:cnode25817", 00:18:06.077 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:06.077 "method": "nvmf_create_subsystem", 00:18:06.077 "req_id": 1 00:18:06.077 } 00:18:06.077 Got JSON-RPC error response 00:18:06.077 response: 00:18:06.077 { 00:18:06.077 "code": -32602, 00:18:06.077 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:06.077 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16511 00:18:06.077 [2024-12-06 14:11:54.660190] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16511: invalid model number 'SPDK_Controller' 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:06.077 { 00:18:06.077 "nqn": "nqn.2016-06.io.spdk:cnode16511", 00:18:06.077 "model_number": "SPDK_Controller\u001f", 00:18:06.077 "method": "nvmf_create_subsystem", 00:18:06.077 "req_id": 1 00:18:06.077 } 00:18:06.077 Got JSON-RPC error response 00:18:06.077 response: 00:18:06.077 { 00:18:06.077 "code": -32602, 00:18:06.077 "message": "Invalid MN SPDK_Controller\u001f" 00:18:06.077 }' 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:06.077 { 00:18:06.077 "nqn": "nqn.2016-06.io.spdk:cnode16511", 00:18:06.077 "model_number": "SPDK_Controller\u001f", 00:18:06.077 "method": "nvmf_create_subsystem", 00:18:06.077 "req_id": 1 00:18:06.077 } 00:18:06.077 Got JSON-RPC error response 00:18:06.077 response: 00:18:06.077 { 00:18:06.077 "code": -32602, 00:18:06.077 "message": "Invalid MN SPDK_Controller\u001f" 00:18:06.077 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:06.077 14:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.077 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
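[Editor's aside] The three nvmf_create_subsystem calls above deliberately pass a bogus target name (foobar), a serial number ending in the non-printable byte \x1f, and a model number carrying the same byte; invalid.sh captures the JSON-RPC error text and pattern-matches it ('Unable to find target', 'Invalid SN', 'Invalid MN'). The per-character trace that follows is gen_random_s assembling a 21-byte random serial for the next negative test. The check pattern, condensed (the 2>&1 capture and '|| true' are illustrative, not a verbatim copy of invalid.sh):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode

# Expect the target to reject a serial number that contains a control byte.
out=$("$rpc_py" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' "${nqn}25817" 2>&1) || true
[[ $out == *"Invalid SN"* ]]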
00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 
00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:06.338 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
91 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]] 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'm!FL@-7jvhE-{{<[e^6!+' 00:18:06.339 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'm!FL@-7jvhE-{{<[e^6!+' nqn.2016-06.io.spdk:cnode28961 00:18:06.601 [2024-12-06 14:11:55.045582] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28961: invalid serial number 'm!FL@-7jvhE-{{<[e^6!+' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:06.601 { 00:18:06.601 "nqn": "nqn.2016-06.io.spdk:cnode28961", 00:18:06.601 "serial_number": "m!FL@-7jvhE-{{<[e^6!+", 00:18:06.601 "method": "nvmf_create_subsystem", 00:18:06.601 "req_id": 1 00:18:06.601 } 00:18:06.601 Got JSON-RPC error response 00:18:06.601 response: 00:18:06.601 { 00:18:06.601 "code": -32602, 00:18:06.601 "message": "Invalid SN m!FL@-7jvhE-{{<[e^6!+" 00:18:06.601 }' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:06.601 { 00:18:06.601 "nqn": "nqn.2016-06.io.spdk:cnode28961", 00:18:06.601 "serial_number": "m!FL@-7jvhE-{{<[e^6!+", 00:18:06.601 "method": "nvmf_create_subsystem", 00:18:06.601 "req_id": 1 00:18:06.601 } 00:18:06.601 Got JSON-RPC error response 00:18:06.601 response: 00:18:06.601 { 00:18:06.601 "code": -32602, 00:18:06.601 "message": "Invalid SN m!FL@-7jvhE-{{<[e^6!+" 00:18:06.601 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:06.601 
14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 
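[Editor's aside] The long run of printf %x / echo -e / string+= steps above and below is gen_random_s from invalid.sh expanding one character at a time under xtrace: the 21-byte pass produced the serial 'm!FL@-7jvhE-{{<[e^6!+' rejected above, and a 41-byte random model number is being assembled here. Note that RANDOM=0 earlier in the trace seeds bash's RNG, so these "random" strings are reproducible between runs. The same idea in compact form (illustrative, not the exact helper; it draws from the ASCII 32-127 range listed in the chars array above):

# Build a length-N string of characters drawn from ASCII 32..127.
gen_random_s() {
    local length=$1 ll s=
    for ((ll = 0; ll < length; ll++)); do
        s+=$(printf '\\x%x' $((RANDOM % 96 + 32)))   # accumulate \xNN escapes
    done
    echo -e "$s"                                     # decode and print the string
}
gen_random_s 41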
00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 
00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:06.601 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
69 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:06.863 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='!' 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '8bpxsy+p6=U+JK5}>)"lEomEK70\J$>-5X AFeVB!' 00:18:06.864 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '8bpxsy+p6=U+JK5}>)"lEomEK70\J$>-5X AFeVB!' nqn.2016-06.io.spdk:cnode16558 00:18:07.124 [2024-12-06 14:11:55.571389] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16558: invalid model number '8bpxsy+p6=U+JK5}>)"lEomEK70\J$>-5X AFeVB!' 00:18:07.124 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:07.124 { 00:18:07.124 "nqn": "nqn.2016-06.io.spdk:cnode16558", 00:18:07.124 "model_number": "8bpxsy+p6=U+JK5}>)\"lEomEK70\\J$>-5X AFeVB!", 00:18:07.124 "method": "nvmf_create_subsystem", 00:18:07.124 "req_id": 1 00:18:07.124 } 00:18:07.124 Got JSON-RPC error response 00:18:07.124 response: 00:18:07.124 { 00:18:07.124 "code": -32602, 00:18:07.124 "message": "Invalid MN 8bpxsy+p6=U+JK5}>)\"lEomEK70\\J$>-5X AFeVB!" 00:18:07.124 }' 00:18:07.124 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:07.124 { 00:18:07.124 "nqn": "nqn.2016-06.io.spdk:cnode16558", 00:18:07.124 "model_number": "8bpxsy+p6=U+JK5}>)\"lEomEK70\\J$>-5X AFeVB!", 00:18:07.124 "method": "nvmf_create_subsystem", 00:18:07.124 "req_id": 1 00:18:07.124 } 00:18:07.124 Got JSON-RPC error response 00:18:07.124 response: 00:18:07.124 { 00:18:07.124 "code": -32602, 00:18:07.124 "message": "Invalid MN 8bpxsy+p6=U+JK5}>)\"lEomEK70\\J$>-5X AFeVB!" 
00:18:07.124 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:07.124 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:07.124 [2024-12-06 14:11:55.760085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.408 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:07.408 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:07.408 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:07.408 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:07.408 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:07.408 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:07.668 [2024-12-06 14:11:56.146773] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:07.668 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:07.668 { 00:18:07.668 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:07.668 "listen_address": { 00:18:07.668 "trtype": "tcp", 00:18:07.668 "traddr": "", 00:18:07.668 "trsvcid": "4421" 00:18:07.668 }, 00:18:07.668 "method": "nvmf_subsystem_remove_listener", 00:18:07.668 "req_id": 1 00:18:07.668 } 00:18:07.668 Got JSON-RPC error response 00:18:07.668 response: 00:18:07.668 { 00:18:07.668 "code": -32602, 00:18:07.668 "message": "Invalid parameters" 00:18:07.668 }' 00:18:07.668 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:07.668 { 00:18:07.668 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:07.668 "listen_address": { 00:18:07.668 "trtype": "tcp", 00:18:07.668 "traddr": "", 00:18:07.668 "trsvcid": "4421" 00:18:07.668 }, 00:18:07.668 "method": "nvmf_subsystem_remove_listener", 00:18:07.668 "req_id": 1 00:18:07.668 } 00:18:07.668 Got JSON-RPC error response 00:18:07.668 response: 00:18:07.668 { 00:18:07.668 "code": -32602, 00:18:07.668 "message": "Invalid parameters" 00:18:07.668 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:07.668 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8398 -i 0 00:18:07.929 [2024-12-06 14:11:56.335320] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8398: invalid cntlid range [0-65519] 00:18:07.929 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:07.929 { 00:18:07.929 "nqn": "nqn.2016-06.io.spdk:cnode8398", 00:18:07.929 "min_cntlid": 0, 00:18:07.929 "method": "nvmf_create_subsystem", 00:18:07.929 "req_id": 1 00:18:07.929 } 00:18:07.929 Got JSON-RPC error response 00:18:07.929 response: 00:18:07.929 { 00:18:07.929 "code": -32602, 00:18:07.929 "message": "Invalid cntlid range [0-65519]" 00:18:07.929 }' 00:18:07.929 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:07.929 { 00:18:07.929 "nqn": "nqn.2016-06.io.spdk:cnode8398", 
00:18:07.929 "min_cntlid": 0, 00:18:07.929 "method": "nvmf_create_subsystem", 00:18:07.929 "req_id": 1 00:18:07.929 } 00:18:07.929 Got JSON-RPC error response 00:18:07.929 response: 00:18:07.929 { 00:18:07.929 "code": -32602, 00:18:07.929 "message": "Invalid cntlid range [0-65519]" 00:18:07.929 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:07.929 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24764 -i 65520 00:18:07.929 [2024-12-06 14:11:56.523949] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24764: invalid cntlid range [65520-65519] 00:18:07.929 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:07.929 { 00:18:07.929 "nqn": "nqn.2016-06.io.spdk:cnode24764", 00:18:07.929 "min_cntlid": 65520, 00:18:07.929 "method": "nvmf_create_subsystem", 00:18:07.929 "req_id": 1 00:18:07.929 } 00:18:07.929 Got JSON-RPC error response 00:18:07.929 response: 00:18:07.929 { 00:18:07.929 "code": -32602, 00:18:07.929 "message": "Invalid cntlid range [65520-65519]" 00:18:07.929 }' 00:18:07.929 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:07.929 { 00:18:07.929 "nqn": "nqn.2016-06.io.spdk:cnode24764", 00:18:07.929 "min_cntlid": 65520, 00:18:07.929 "method": "nvmf_create_subsystem", 00:18:07.929 "req_id": 1 00:18:07.929 } 00:18:07.929 Got JSON-RPC error response 00:18:07.929 response: 00:18:07.929 { 00:18:07.929 "code": -32602, 00:18:07.929 "message": "Invalid cntlid range [65520-65519]" 00:18:07.929 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:07.929 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19255 -I 0 00:18:08.190 [2024-12-06 14:11:56.708499] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19255: invalid cntlid range [1-0] 00:18:08.190 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:08.190 { 00:18:08.190 "nqn": "nqn.2016-06.io.spdk:cnode19255", 00:18:08.190 "max_cntlid": 0, 00:18:08.190 "method": "nvmf_create_subsystem", 00:18:08.190 "req_id": 1 00:18:08.190 } 00:18:08.190 Got JSON-RPC error response 00:18:08.190 response: 00:18:08.190 { 00:18:08.190 "code": -32602, 00:18:08.190 "message": "Invalid cntlid range [1-0]" 00:18:08.190 }' 00:18:08.190 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:08.190 { 00:18:08.190 "nqn": "nqn.2016-06.io.spdk:cnode19255", 00:18:08.190 "max_cntlid": 0, 00:18:08.190 "method": "nvmf_create_subsystem", 00:18:08.190 "req_id": 1 00:18:08.190 } 00:18:08.190 Got JSON-RPC error response 00:18:08.190 response: 00:18:08.190 { 00:18:08.190 "code": -32602, 00:18:08.190 "message": "Invalid cntlid range [1-0]" 00:18:08.190 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:08.190 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26148 -I 65520 00:18:08.451 [2024-12-06 14:11:56.893080] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26148: invalid cntlid range [1-65520] 00:18:08.451 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@79 -- # out='request: 00:18:08.451 { 00:18:08.451 "nqn": "nqn.2016-06.io.spdk:cnode26148", 00:18:08.451 "max_cntlid": 65520, 00:18:08.451 "method": "nvmf_create_subsystem", 00:18:08.451 "req_id": 1 00:18:08.451 } 00:18:08.451 Got JSON-RPC error response 00:18:08.451 response: 00:18:08.451 { 00:18:08.451 "code": -32602, 00:18:08.451 "message": "Invalid cntlid range [1-65520]" 00:18:08.451 }' 00:18:08.451 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:08.451 { 00:18:08.451 "nqn": "nqn.2016-06.io.spdk:cnode26148", 00:18:08.451 "max_cntlid": 65520, 00:18:08.451 "method": "nvmf_create_subsystem", 00:18:08.451 "req_id": 1 00:18:08.451 } 00:18:08.451 Got JSON-RPC error response 00:18:08.451 response: 00:18:08.451 { 00:18:08.451 "code": -32602, 00:18:08.451 "message": "Invalid cntlid range [1-65520]" 00:18:08.451 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:08.451 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26968 -i 6 -I 5 00:18:08.710 [2024-12-06 14:11:57.097743] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26968: invalid cntlid range [6-5] 00:18:08.710 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:08.710 { 00:18:08.710 "nqn": "nqn.2016-06.io.spdk:cnode26968", 00:18:08.710 "min_cntlid": 6, 00:18:08.710 "max_cntlid": 5, 00:18:08.710 "method": "nvmf_create_subsystem", 00:18:08.710 "req_id": 1 00:18:08.710 } 00:18:08.710 Got JSON-RPC error response 00:18:08.710 response: 00:18:08.710 { 00:18:08.710 "code": -32602, 00:18:08.710 "message": "Invalid cntlid range [6-5]" 00:18:08.710 }' 00:18:08.710 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:08.710 { 00:18:08.710 "nqn": "nqn.2016-06.io.spdk:cnode26968", 00:18:08.710 "min_cntlid": 6, 00:18:08.710 "max_cntlid": 5, 00:18:08.710 "method": "nvmf_create_subsystem", 00:18:08.710 "req_id": 1 00:18:08.710 } 00:18:08.710 Got JSON-RPC error response 00:18:08.710 response: 00:18:08.710 { 00:18:08.710 "code": -32602, 00:18:08.710 "message": "Invalid cntlid range [6-5]" 00:18:08.710 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:08.710 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:08.710 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:08.710 { 00:18:08.710 "name": "foobar", 00:18:08.710 "method": "nvmf_delete_target", 00:18:08.710 "req_id": 1 00:18:08.710 } 00:18:08.710 Got JSON-RPC error response 00:18:08.710 response: 00:18:08.710 { 00:18:08.710 "code": -32602, 00:18:08.710 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:08.710 }' 00:18:08.710 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:08.710 { 00:18:08.710 "name": "foobar", 00:18:08.710 "method": "nvmf_delete_target", 00:18:08.710 "req_id": 1 00:18:08.710 } 00:18:08.711 Got JSON-RPC error response 00:18:08.711 response: 00:18:08.711 { 00:18:08.711 "code": -32602, 00:18:08.711 "message": "The specified target doesn't exist, cannot delete it." 
00:18:08.711 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:08.711 rmmod nvme_tcp 00:18:08.711 rmmod nvme_fabrics 00:18:08.711 rmmod nvme_keyring 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2752287 ']' 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2752287 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2752287 ']' 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2752287 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.711 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2752287 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2752287' 00:18:08.971 killing process with pid 2752287 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2752287 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2752287 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.971 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:11.512 00:18:11.512 real 0m14.154s 00:18:11.512 user 0m21.153s 00:18:11.512 sys 0m6.671s 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:11.512 ************************************ 00:18:11.512 END TEST nvmf_invalid 00:18:11.512 ************************************ 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:11.512 ************************************ 00:18:11.512 START TEST nvmf_connect_stress 00:18:11.512 ************************************ 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:11.512 * Looking for test storage... 
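The nvmf_invalid test that finishes above is pure negative testing: every RPC in it is meant to fail, and the script passes only when the target answers with JSON-RPC error -32602 and the expected text ("Invalid MN" for the 41-character model number built in the trace, one byte over the 40-byte NVMe MN field; "Invalid cntlid range" for controller-ID ranges outside 1-65519 or with min greater than max). A minimal stand-alone sketch of that pattern, assuming a running nvmf_tgt and the scripts/rpc.py client from the SPDK tree (the expect_err helper exists only in this sketch):

  rpc=./scripts/rpc.py                        # assumed path to the JSON-RPC client

  expect_err() {                              # sketch-only helper: run an RPC that
      local pattern=$1; shift                 # must fail, then grep the error text
      local out
      out=$("$rpc" "$@" 2>&1) && { echo "unexpected success: $*"; return 1; }
      [[ $out == *"$pattern"* ]] || { echo "wrong error for: $*"; return 1; }
  }

  mn=$(printf 'A%.0s' {1..41})                # 41 chars, one more than the MN field, as in the trace
  expect_err 'Invalid MN' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16558 -d "$mn"
  expect_err 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8398 -i 0
  expect_err 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26968 -i 6 -I 5
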
00:18:11.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:11.512 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:11.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.513 --rc genhtml_branch_coverage=1 00:18:11.513 --rc genhtml_function_coverage=1 00:18:11.513 --rc genhtml_legend=1 00:18:11.513 --rc geninfo_all_blocks=1 00:18:11.513 --rc geninfo_unexecuted_blocks=1 00:18:11.513 00:18:11.513 ' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:11.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.513 --rc genhtml_branch_coverage=1 00:18:11.513 --rc genhtml_function_coverage=1 00:18:11.513 --rc genhtml_legend=1 00:18:11.513 --rc geninfo_all_blocks=1 00:18:11.513 --rc geninfo_unexecuted_blocks=1 00:18:11.513 00:18:11.513 ' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:11.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.513 --rc genhtml_branch_coverage=1 00:18:11.513 --rc genhtml_function_coverage=1 00:18:11.513 --rc genhtml_legend=1 00:18:11.513 --rc geninfo_all_blocks=1 00:18:11.513 --rc geninfo_unexecuted_blocks=1 00:18:11.513 00:18:11.513 ' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:11.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.513 --rc genhtml_branch_coverage=1 00:18:11.513 --rc genhtml_function_coverage=1 00:18:11.513 --rc genhtml_legend=1 00:18:11.513 --rc geninfo_all_blocks=1 00:18:11.513 --rc geninfo_unexecuted_blocks=1 00:18:11.513 00:18:11.513 ' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:11.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:11.513 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:19.649 14:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:19.649 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.649 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:19.650 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:19.650 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:19.650 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:19.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:18:19.650 00:18:19.650 --- 10.0.0.2 ping statistics --- 00:18:19.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.650 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:19.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:18:19.650 00:18:19.650 --- 10.0.0.1 ping statistics --- 00:18:19.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.650 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2757585 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2757585 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2757585 ']' 00:18:19.650 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.651 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.651 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:19.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.651 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.651 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.651 [2024-12-06 14:12:07.445098] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:18:19.651 [2024-12-06 14:12:07.445159] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.651 [2024-12-06 14:12:07.549643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:19.651 [2024-12-06 14:12:07.600757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.651 [2024-12-06 14:12:07.600807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.651 [2024-12-06 14:12:07.600816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.651 [2024-12-06 14:12:07.600823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.651 [2024-12-06 14:12:07.600829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.651 [2024-12-06 14:12:07.602934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.651 [2024-12-06 14:12:07.603098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.651 [2024-12-06 14:12:07.603098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.651 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.651 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:19.651 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.651 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.651 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.914 [2024-12-06 14:12:08.319993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
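For readability: the target bring-up traced here (transport, then subsystem, with the listener and null bdev following just below) condenses to a handful of JSON-RPC calls. The sketch below is not taken from the test script; it substitutes SPDK's standalone scripts/rpc.py client for the framework's rpc_cmd wrapper, with every subcommand and argument copied from the trace itself.

# nvmf_tgt runs inside the test namespace with core mask 0xE (reactors on cores 1-3, as logged above)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# TCP transport, options exactly as passed by the script
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# subsystem cnode1: allow any host (-a), fixed serial, at most 10 namespaces
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# listener on the namespaced port's address, NVMe/TCP port 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# null bdev used as the stress target: 1000 MB, 512-byte blocks
./scripts/rpc.py bdev_null_create NULL1 1000 512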
00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.914 [2024-12-06 14:12:08.345633] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.914 NULL1 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2757766 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:19.914 14:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.914 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.176 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.176 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:20.176 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.176 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.176 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.755 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.755 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:20.755 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.755 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.755 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.017 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.017 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:21.017 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.017 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.017 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.279 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.279 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:21.279 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.279 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.279 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.540 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.540 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:21.540 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.540 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.540 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.801 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.801 14:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:21.801 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.801 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.801 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.371 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.371 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:22.371 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.371 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.371 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.631 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.631 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:22.631 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.631 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.631 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.889 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.889 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:22.889 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.889 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.889 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:23.148 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.148 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:23.148 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:23.148 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.148 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:23.718 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.718 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:23.718 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:23.718 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.718 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:23.978 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.978 14:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:23.978 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:23.978 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.978 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.239 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.239 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:24.239 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.239 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.239 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.498 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.498 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:24.499 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.499 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.499 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.758 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.758 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:24.758 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.758 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.758 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.330 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.330 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:25.330 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.330 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.330 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.590 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.590 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:25.590 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.590 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.590 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.857 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.857 14:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:25.857 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.857 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.857 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.118 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.118 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:26.118 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.118 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.118 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.379 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.379 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:26.379 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.379 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.379 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.948 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.948 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:26.948 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.948 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.948 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.208 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.208 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:27.208 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.208 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.208 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.468 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.468 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:27.468 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.468 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.468 14:12:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.728 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.728 14:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:27.728 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.728 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.728 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.989 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.989 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:27.989 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.989 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.989 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.580 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.580 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:28.580 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.580 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.580 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.840 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.840 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:28.840 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.840 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.840 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.101 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.101 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:29.101 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.101 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.101 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.361 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.361 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:29.361 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.361 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.361 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.622 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.622 14:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:29.622 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.622 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.622 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.882 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2757766 00:18:30.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2757766) - No such process 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2757766 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:30.143 rmmod nvme_tcp 00:18:30.143 rmmod nvme_fabrics 00:18:30.143 rmmod nvme_keyring 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2757585 ']' 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2757585 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2757585 ']' 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2757585 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2757585 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
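The long alternating run of kill -0 2757766 / rpc_cmd entries above is the stress loop itself: the backgrounded connect_stress initiator (PID 2757766, started with -t 10 at 14:12:08 and gone ten seconds later at 14:12:18) stresses connections to cnode1, while the shell replays a batch of RPCs against the target for as long as that process exists. Stripped of the xtrace noise, the pattern is the sketch below; feeding rpc.txt to rpc_cmd on stdin is an assumption, since redirections are not shown in the trace.

# $PERF_PID is the backgrounded connect_stress process; kill -0 sends no signal,
# it merely succeeds while the process still exists
while kill -0 "$PERF_PID"; do
    rpc_cmd < "$rpcs"   # presumably replays the batch built by the 'seq 1 20' cat loop above
done
wait "$PERF_PID"        # reap the initiator once the loop sees it is gone
rm -f "$rpcs"           # matches the rm -f .../rpc.txt at the end of the trace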
00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2757585' 00:18:30.143 killing process with pid 2757585 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2757585 00:18:30.143 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2757585 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.403 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.315 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:32.315 00:18:32.315 real 0m21.256s 00:18:32.315 user 0m42.160s 00:18:32.315 sys 0m9.401s 00:18:32.315 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.315 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.315 ************************************ 00:18:32.315 END TEST nvmf_connect_stress 00:18:32.315 ************************************ 00:18:32.315 14:12:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:32.315 14:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:32.315 14:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.315 14:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:32.576 ************************************ 00:18:32.576 START TEST nvmf_fused_ordering 00:18:32.576 ************************************ 00:18:32.576 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:32.576 * Looking for test storage... 
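Before the fused_ordering test begins, nvmftestfini (traced above) has already torn the rig down: the nvme-tcp/nvme-fabrics modules are unloaded, target process 2757585 is killed, and nvmf_tcp_fini undoes the network setup. Reduced to the commands visible in the trace, plus the namespace removal that _remove_spdk_ns is expected to perform (its body is not expanded in the xtrace), the network cleanup is roughly:

# drop every rule the framework tagged with the SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore
# assumed: _remove_spdk_ns deletes the target-side namespace, returning the
# physical port cvl_0_0 to the default namespace
ip netns delete cvl_0_0_ns_spdk
# flush the 10.0.0.1/24 address left on the initiator-side port (shown at nvmf/common.sh@303)
ip -4 addr flush cvl_0_1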
00:18:32.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:32.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.576 --rc genhtml_branch_coverage=1 00:18:32.576 --rc genhtml_function_coverage=1 00:18:32.576 --rc genhtml_legend=1 00:18:32.576 --rc geninfo_all_blocks=1 00:18:32.576 --rc geninfo_unexecuted_blocks=1 00:18:32.576 00:18:32.576 ' 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:32.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.576 --rc genhtml_branch_coverage=1 00:18:32.576 --rc genhtml_function_coverage=1 00:18:32.576 --rc genhtml_legend=1 00:18:32.576 --rc geninfo_all_blocks=1 00:18:32.576 --rc geninfo_unexecuted_blocks=1 00:18:32.576 00:18:32.576 ' 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:32.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.576 --rc genhtml_branch_coverage=1 00:18:32.576 --rc genhtml_function_coverage=1 00:18:32.576 --rc genhtml_legend=1 00:18:32.576 --rc geninfo_all_blocks=1 00:18:32.576 --rc geninfo_unexecuted_blocks=1 00:18:32.576 00:18:32.576 ' 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:32.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.576 --rc genhtml_branch_coverage=1 00:18:32.576 --rc genhtml_function_coverage=1 00:18:32.576 --rc genhtml_legend=1 00:18:32.576 --rc geninfo_all_blocks=1 00:18:32.576 --rc geninfo_unexecuted_blocks=1 00:18:32.576 00:18:32.576 ' 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.576 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:32.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:32.577 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:32.836 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:32.836 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.836 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.836 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.836 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:32.836 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:32.836 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:32.836 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:40.973 14:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:40.973 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:40.973 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:40.973 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:40.973 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:40.973 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:40.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:18:40.974 00:18:40.974 --- 10.0.0.2 ping statistics --- 00:18:40.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.974 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:18:40.974 00:18:40.974 --- 10.0.0.1 ping statistics --- 00:18:40.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.974 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2764428 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2764428 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2764428 ']' 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:40.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.974 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:40.974 [2024-12-06 14:12:28.708800] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:18:40.974 [2024-12-06 14:12:28.708869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.974 [2024-12-06 14:12:28.810871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.974 [2024-12-06 14:12:28.861111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.974 [2024-12-06 14:12:28.861162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.974 [2024-12-06 14:12:28.861171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.974 [2024-12-06 14:12:28.861178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.974 [2024-12-06 14:12:28.861184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.974 [2024-12-06 14:12:28.861948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:40.974 [2024-12-06 14:12:29.585440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.974 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:41.235 [2024-12-06 14:12:29.609693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:41.235 NULL1 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.235 14:12:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:41.235 [2024-12-06 14:12:29.678886] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
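For readers who want to reproduce the target provisioning captured in the trace above outside the test harness, the rpc_cmd calls are roughly equivalent to invoking scripts/rpc.py from the SPDK tree with the same arguments against the running nvmf_tgt (which, in this run, listens on the default RPC socket /var/tmp/spdk.sock and was started inside the cvl_0_0_ns_spdk namespace). The sketch below is an illustrative reconstruction, not harness code: the network commands and all argument values are copied verbatim from the log above, while the use of scripts/rpc.py in place of the rpc_cmd wrapper, run from the SPDK repository root, is an assumption about how the wrapper dispatches its arguments.

  # move the target-side port into its own namespace and address both ends,
  # as done by nvmf_tcp_init above (cvl_0_0 = target side, cvl_0_1 = initiator side)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

  # provision the TCP subsystem exercised by the fused_ordering test
  # (flags copied verbatim from the rpc_cmd lines in this run)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary is then pointed at that listener with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', exactly as logged above; the fused_ordering(0) through fused_ordering(1023) lines that follow are that tool's output for this run.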
00:18:41.235 [2024-12-06 14:12:29.678934] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764728 ] 00:18:41.495 Attached to nqn.2016-06.io.spdk:cnode1 00:18:41.495 Namespace ID: 1 size: 1GB 00:18:41.495 fused_ordering(0) 00:18:41.495 fused_ordering(1) 00:18:41.495 fused_ordering(2) 00:18:41.495 fused_ordering(3) 00:18:41.495 fused_ordering(4) 00:18:41.495 fused_ordering(5) 00:18:41.495 fused_ordering(6) 00:18:41.495 fused_ordering(7) 00:18:41.495 fused_ordering(8) 00:18:41.495 fused_ordering(9) 00:18:41.495 fused_ordering(10) 00:18:41.495 fused_ordering(11) 00:18:41.496 fused_ordering(12) 00:18:41.496 fused_ordering(13) 00:18:41.496 fused_ordering(14) 00:18:41.496 fused_ordering(15) 00:18:41.496 fused_ordering(16) 00:18:41.496 fused_ordering(17) 00:18:41.496 fused_ordering(18) 00:18:41.496 fused_ordering(19) 00:18:41.496 fused_ordering(20) 00:18:41.496 fused_ordering(21) 00:18:41.496 fused_ordering(22) 00:18:41.496 fused_ordering(23) 00:18:41.496 fused_ordering(24) 00:18:41.496 fused_ordering(25) 00:18:41.496 fused_ordering(26) 00:18:41.496 fused_ordering(27) 00:18:41.496 fused_ordering(28) 00:18:41.496 fused_ordering(29) 00:18:41.496 fused_ordering(30) 00:18:41.496 fused_ordering(31) 00:18:41.496 fused_ordering(32) 00:18:41.496 fused_ordering(33) 00:18:41.496 fused_ordering(34) 00:18:41.496 fused_ordering(35) 00:18:41.496 fused_ordering(36) 00:18:41.496 fused_ordering(37) 00:18:41.496 fused_ordering(38) 00:18:41.496 fused_ordering(39) 00:18:41.496 fused_ordering(40) 00:18:41.496 fused_ordering(41) 00:18:41.496 fused_ordering(42) 00:18:41.496 fused_ordering(43) 00:18:41.496 fused_ordering(44) 00:18:41.496 fused_ordering(45) 00:18:41.496 fused_ordering(46) 00:18:41.496 fused_ordering(47) 00:18:41.496 fused_ordering(48) 00:18:41.496 fused_ordering(49) 00:18:41.496 fused_ordering(50) 00:18:41.496 fused_ordering(51) 00:18:41.496 fused_ordering(52) 00:18:41.496 fused_ordering(53) 00:18:41.496 fused_ordering(54) 00:18:41.496 fused_ordering(55) 00:18:41.496 fused_ordering(56) 00:18:41.496 fused_ordering(57) 00:18:41.496 fused_ordering(58) 00:18:41.496 fused_ordering(59) 00:18:41.496 fused_ordering(60) 00:18:41.496 fused_ordering(61) 00:18:41.496 fused_ordering(62) 00:18:41.496 fused_ordering(63) 00:18:41.496 fused_ordering(64) 00:18:41.496 fused_ordering(65) 00:18:41.496 fused_ordering(66) 00:18:41.496 fused_ordering(67) 00:18:41.496 fused_ordering(68) 00:18:41.496 fused_ordering(69) 00:18:41.496 fused_ordering(70) 00:18:41.496 fused_ordering(71) 00:18:41.496 fused_ordering(72) 00:18:41.496 fused_ordering(73) 00:18:41.496 fused_ordering(74) 00:18:41.496 fused_ordering(75) 00:18:41.496 fused_ordering(76) 00:18:41.496 fused_ordering(77) 00:18:41.496 fused_ordering(78) 00:18:41.496 fused_ordering(79) 00:18:41.496 fused_ordering(80) 00:18:41.496 fused_ordering(81) 00:18:41.496 fused_ordering(82) 00:18:41.496 fused_ordering(83) 00:18:41.496 fused_ordering(84) 00:18:41.496 fused_ordering(85) 00:18:41.496 fused_ordering(86) 00:18:41.496 fused_ordering(87) 00:18:41.496 fused_ordering(88) 00:18:41.496 fused_ordering(89) 00:18:41.496 fused_ordering(90) 00:18:41.496 fused_ordering(91) 00:18:41.496 fused_ordering(92) 00:18:41.496 fused_ordering(93) 00:18:41.496 fused_ordering(94) 00:18:41.496 fused_ordering(95) 00:18:41.496 fused_ordering(96) 00:18:41.496 fused_ordering(97) 00:18:41.496 fused_ordering(98) 
00:18:41.496 fused_ordering(99) 00:18:41.496 fused_ordering(100) 00:18:41.496 fused_ordering(101) 00:18:41.496 fused_ordering(102) 00:18:41.496 fused_ordering(103) 00:18:41.496 fused_ordering(104) 00:18:41.496 fused_ordering(105) 00:18:41.496 fused_ordering(106) 00:18:41.496 fused_ordering(107) 00:18:41.496 fused_ordering(108) 00:18:41.496 fused_ordering(109) 00:18:41.496 fused_ordering(110) 00:18:41.496 fused_ordering(111) 00:18:41.496 fused_ordering(112) 00:18:41.496 fused_ordering(113) 00:18:41.496 fused_ordering(114) 00:18:41.496 fused_ordering(115) 00:18:41.496 fused_ordering(116) 00:18:41.496 fused_ordering(117) 00:18:41.496 fused_ordering(118) 00:18:41.496 fused_ordering(119) 00:18:41.496 fused_ordering(120) 00:18:41.496 fused_ordering(121) 00:18:41.496 fused_ordering(122) 00:18:41.496 fused_ordering(123) 00:18:41.496 fused_ordering(124) 00:18:41.496 fused_ordering(125) 00:18:41.496 fused_ordering(126) 00:18:41.496 fused_ordering(127) 00:18:41.496 fused_ordering(128) 00:18:41.496 fused_ordering(129) 00:18:41.496 fused_ordering(130) 00:18:41.496 fused_ordering(131) 00:18:41.496 fused_ordering(132) 00:18:41.496 fused_ordering(133) 00:18:41.496 fused_ordering(134) 00:18:41.496 fused_ordering(135) 00:18:41.496 fused_ordering(136) 00:18:41.496 fused_ordering(137) 00:18:41.496 fused_ordering(138) 00:18:41.496 fused_ordering(139) 00:18:41.496 fused_ordering(140) 00:18:41.496 fused_ordering(141) 00:18:41.496 fused_ordering(142) 00:18:41.496 fused_ordering(143) 00:18:41.496 fused_ordering(144) 00:18:41.496 fused_ordering(145) 00:18:41.496 fused_ordering(146) 00:18:41.496 fused_ordering(147) 00:18:41.496 fused_ordering(148) 00:18:41.496 fused_ordering(149) 00:18:41.496 fused_ordering(150) 00:18:41.496 fused_ordering(151) 00:18:41.496 fused_ordering(152) 00:18:41.496 fused_ordering(153) 00:18:41.496 fused_ordering(154) 00:18:41.496 fused_ordering(155) 00:18:41.496 fused_ordering(156) 00:18:41.496 fused_ordering(157) 00:18:41.496 fused_ordering(158) 00:18:41.496 fused_ordering(159) 00:18:41.496 fused_ordering(160) 00:18:41.496 fused_ordering(161) 00:18:41.496 fused_ordering(162) 00:18:41.496 fused_ordering(163) 00:18:41.496 fused_ordering(164) 00:18:41.496 fused_ordering(165) 00:18:41.496 fused_ordering(166) 00:18:41.496 fused_ordering(167) 00:18:41.496 fused_ordering(168) 00:18:41.496 fused_ordering(169) 00:18:41.496 fused_ordering(170) 00:18:41.496 fused_ordering(171) 00:18:41.496 fused_ordering(172) 00:18:41.496 fused_ordering(173) 00:18:41.496 fused_ordering(174) 00:18:41.496 fused_ordering(175) 00:18:41.496 fused_ordering(176) 00:18:41.496 fused_ordering(177) 00:18:41.496 fused_ordering(178) 00:18:41.496 fused_ordering(179) 00:18:41.496 fused_ordering(180) 00:18:41.496 fused_ordering(181) 00:18:41.496 fused_ordering(182) 00:18:41.496 fused_ordering(183) 00:18:41.496 fused_ordering(184) 00:18:41.496 fused_ordering(185) 00:18:41.496 fused_ordering(186) 00:18:41.496 fused_ordering(187) 00:18:41.496 fused_ordering(188) 00:18:41.496 fused_ordering(189) 00:18:41.496 fused_ordering(190) 00:18:41.496 fused_ordering(191) 00:18:41.496 fused_ordering(192) 00:18:41.496 fused_ordering(193) 00:18:41.496 fused_ordering(194) 00:18:41.496 fused_ordering(195) 00:18:41.496 fused_ordering(196) 00:18:41.496 fused_ordering(197) 00:18:41.496 fused_ordering(198) 00:18:41.496 fused_ordering(199) 00:18:41.496 fused_ordering(200) 00:18:41.496 fused_ordering(201) 00:18:41.496 fused_ordering(202) 00:18:41.496 fused_ordering(203) 00:18:41.496 fused_ordering(204) 00:18:41.496 fused_ordering(205) 00:18:42.067 
fused_ordering(206) 00:18:42.067 fused_ordering(207) 00:18:42.067 fused_ordering(208) 00:18:42.067 fused_ordering(209) 00:18:42.067 fused_ordering(210) 00:18:42.067 fused_ordering(211) 00:18:42.067 fused_ordering(212) 00:18:42.067 fused_ordering(213) 00:18:42.067 fused_ordering(214) 00:18:42.067 fused_ordering(215) 00:18:42.067 fused_ordering(216) 00:18:42.067 fused_ordering(217) 00:18:42.067 fused_ordering(218) 00:18:42.067 fused_ordering(219) 00:18:42.067 fused_ordering(220) 00:18:42.067 fused_ordering(221) 00:18:42.067 fused_ordering(222) 00:18:42.067 fused_ordering(223) 00:18:42.067 fused_ordering(224) 00:18:42.068 fused_ordering(225) 00:18:42.068 fused_ordering(226) 00:18:42.068 fused_ordering(227) 00:18:42.068 fused_ordering(228) 00:18:42.068 fused_ordering(229) 00:18:42.068 fused_ordering(230) 00:18:42.068 fused_ordering(231) 00:18:42.068 fused_ordering(232) 00:18:42.068 fused_ordering(233) 00:18:42.068 fused_ordering(234) 00:18:42.068 fused_ordering(235) 00:18:42.068 fused_ordering(236) 00:18:42.068 fused_ordering(237) 00:18:42.068 fused_ordering(238) 00:18:42.068 fused_ordering(239) 00:18:42.068 fused_ordering(240) 00:18:42.068 fused_ordering(241) 00:18:42.068 fused_ordering(242) 00:18:42.068 fused_ordering(243) 00:18:42.068 fused_ordering(244) 00:18:42.068 fused_ordering(245) 00:18:42.068 fused_ordering(246) 00:18:42.068 fused_ordering(247) 00:18:42.068 fused_ordering(248) 00:18:42.068 fused_ordering(249) 00:18:42.068 fused_ordering(250) 00:18:42.068 fused_ordering(251) 00:18:42.068 fused_ordering(252) 00:18:42.068 fused_ordering(253) 00:18:42.068 fused_ordering(254) 00:18:42.068 fused_ordering(255) 00:18:42.068 fused_ordering(256) 00:18:42.068 fused_ordering(257) 00:18:42.068 fused_ordering(258) 00:18:42.068 fused_ordering(259) 00:18:42.068 fused_ordering(260) 00:18:42.068 fused_ordering(261) 00:18:42.068 fused_ordering(262) 00:18:42.068 fused_ordering(263) 00:18:42.068 fused_ordering(264) 00:18:42.068 fused_ordering(265) 00:18:42.068 fused_ordering(266) 00:18:42.068 fused_ordering(267) 00:18:42.068 fused_ordering(268) 00:18:42.068 fused_ordering(269) 00:18:42.068 fused_ordering(270) 00:18:42.068 fused_ordering(271) 00:18:42.068 fused_ordering(272) 00:18:42.068 fused_ordering(273) 00:18:42.068 fused_ordering(274) 00:18:42.068 fused_ordering(275) 00:18:42.068 fused_ordering(276) 00:18:42.068 fused_ordering(277) 00:18:42.068 fused_ordering(278) 00:18:42.068 fused_ordering(279) 00:18:42.068 fused_ordering(280) 00:18:42.068 fused_ordering(281) 00:18:42.068 fused_ordering(282) 00:18:42.068 fused_ordering(283) 00:18:42.068 fused_ordering(284) 00:18:42.068 fused_ordering(285) 00:18:42.068 fused_ordering(286) 00:18:42.068 fused_ordering(287) 00:18:42.068 fused_ordering(288) 00:18:42.068 fused_ordering(289) 00:18:42.068 fused_ordering(290) 00:18:42.068 fused_ordering(291) 00:18:42.068 fused_ordering(292) 00:18:42.068 fused_ordering(293) 00:18:42.068 fused_ordering(294) 00:18:42.068 fused_ordering(295) 00:18:42.068 fused_ordering(296) 00:18:42.068 fused_ordering(297) 00:18:42.068 fused_ordering(298) 00:18:42.068 fused_ordering(299) 00:18:42.068 fused_ordering(300) 00:18:42.068 fused_ordering(301) 00:18:42.068 fused_ordering(302) 00:18:42.068 fused_ordering(303) 00:18:42.068 fused_ordering(304) 00:18:42.068 fused_ordering(305) 00:18:42.068 fused_ordering(306) 00:18:42.068 fused_ordering(307) 00:18:42.068 fused_ordering(308) 00:18:42.068 fused_ordering(309) 00:18:42.068 fused_ordering(310) 00:18:42.068 fused_ordering(311) 00:18:42.068 fused_ordering(312) 00:18:42.068 fused_ordering(313) 
00:18:42.068 fused_ordering(314) 00:18:42.068 fused_ordering(315) 00:18:42.068 fused_ordering(316) 00:18:42.068 fused_ordering(317) 00:18:42.068 fused_ordering(318) 00:18:42.068 fused_ordering(319) 00:18:42.068 fused_ordering(320) 00:18:42.068 fused_ordering(321) 00:18:42.068 fused_ordering(322) 00:18:42.068 fused_ordering(323) 00:18:42.068 fused_ordering(324) 00:18:42.068 fused_ordering(325) 00:18:42.068 fused_ordering(326) 00:18:42.068 fused_ordering(327) 00:18:42.068 fused_ordering(328) 00:18:42.068 fused_ordering(329) 00:18:42.068 fused_ordering(330) 00:18:42.068 fused_ordering(331) 00:18:42.068 fused_ordering(332) 00:18:42.068 fused_ordering(333) 00:18:42.068 fused_ordering(334) 00:18:42.068 fused_ordering(335) 00:18:42.068 fused_ordering(336) 00:18:42.068 fused_ordering(337) 00:18:42.068 fused_ordering(338) 00:18:42.068 fused_ordering(339) 00:18:42.068 fused_ordering(340) 00:18:42.068 fused_ordering(341) 00:18:42.068 fused_ordering(342) 00:18:42.068 fused_ordering(343) 00:18:42.068 fused_ordering(344) 00:18:42.068 fused_ordering(345) 00:18:42.068 fused_ordering(346) 00:18:42.068 fused_ordering(347) 00:18:42.068 fused_ordering(348) 00:18:42.068 fused_ordering(349) 00:18:42.068 fused_ordering(350) 00:18:42.068 fused_ordering(351) 00:18:42.068 fused_ordering(352) 00:18:42.068 fused_ordering(353) 00:18:42.068 fused_ordering(354) 00:18:42.068 fused_ordering(355) 00:18:42.068 fused_ordering(356) 00:18:42.068 fused_ordering(357) 00:18:42.068 fused_ordering(358) 00:18:42.068 fused_ordering(359) 00:18:42.068 fused_ordering(360) 00:18:42.068 fused_ordering(361) 00:18:42.068 fused_ordering(362) 00:18:42.068 fused_ordering(363) 00:18:42.068 fused_ordering(364) 00:18:42.068 fused_ordering(365) 00:18:42.068 fused_ordering(366) 00:18:42.068 fused_ordering(367) 00:18:42.068 fused_ordering(368) 00:18:42.068 fused_ordering(369) 00:18:42.068 fused_ordering(370) 00:18:42.068 fused_ordering(371) 00:18:42.068 fused_ordering(372) 00:18:42.068 fused_ordering(373) 00:18:42.068 fused_ordering(374) 00:18:42.068 fused_ordering(375) 00:18:42.068 fused_ordering(376) 00:18:42.068 fused_ordering(377) 00:18:42.068 fused_ordering(378) 00:18:42.068 fused_ordering(379) 00:18:42.068 fused_ordering(380) 00:18:42.068 fused_ordering(381) 00:18:42.068 fused_ordering(382) 00:18:42.068 fused_ordering(383) 00:18:42.068 fused_ordering(384) 00:18:42.068 fused_ordering(385) 00:18:42.068 fused_ordering(386) 00:18:42.068 fused_ordering(387) 00:18:42.068 fused_ordering(388) 00:18:42.068 fused_ordering(389) 00:18:42.068 fused_ordering(390) 00:18:42.068 fused_ordering(391) 00:18:42.068 fused_ordering(392) 00:18:42.068 fused_ordering(393) 00:18:42.068 fused_ordering(394) 00:18:42.068 fused_ordering(395) 00:18:42.068 fused_ordering(396) 00:18:42.068 fused_ordering(397) 00:18:42.068 fused_ordering(398) 00:18:42.068 fused_ordering(399) 00:18:42.068 fused_ordering(400) 00:18:42.068 fused_ordering(401) 00:18:42.068 fused_ordering(402) 00:18:42.068 fused_ordering(403) 00:18:42.068 fused_ordering(404) 00:18:42.068 fused_ordering(405) 00:18:42.068 fused_ordering(406) 00:18:42.068 fused_ordering(407) 00:18:42.068 fused_ordering(408) 00:18:42.068 fused_ordering(409) 00:18:42.068 fused_ordering(410) 00:18:42.329 fused_ordering(411) 00:18:42.329 fused_ordering(412) 00:18:42.329 fused_ordering(413) 00:18:42.329 fused_ordering(414) 00:18:42.329 fused_ordering(415) 00:18:42.329 fused_ordering(416) 00:18:42.329 fused_ordering(417) 00:18:42.329 fused_ordering(418) 00:18:42.329 fused_ordering(419) 00:18:42.329 fused_ordering(420) 00:18:42.329 
fused_ordering(421) 00:18:42.329 fused_ordering(422) 00:18:42.329 fused_ordering(423) 00:18:42.329 fused_ordering(424) 00:18:42.329 fused_ordering(425) 00:18:42.329 fused_ordering(426) 00:18:42.329 fused_ordering(427) 00:18:42.329 fused_ordering(428) 00:18:42.329 fused_ordering(429) 00:18:42.329 fused_ordering(430) 00:18:42.329 fused_ordering(431) 00:18:42.329 fused_ordering(432) 00:18:42.329 fused_ordering(433) 00:18:42.329 fused_ordering(434) 00:18:42.329 fused_ordering(435) 00:18:42.329 fused_ordering(436) 00:18:42.329 fused_ordering(437) 00:18:42.329 fused_ordering(438) 00:18:42.329 fused_ordering(439) 00:18:42.329 fused_ordering(440) 00:18:42.329 fused_ordering(441) 00:18:42.329 fused_ordering(442) 00:18:42.329 fused_ordering(443) 00:18:42.329 fused_ordering(444) 00:18:42.329 fused_ordering(445) 00:18:42.329 fused_ordering(446) 00:18:42.329 fused_ordering(447) 00:18:42.329 fused_ordering(448) 00:18:42.329 fused_ordering(449) 00:18:42.329 fused_ordering(450) 00:18:42.329 fused_ordering(451) 00:18:42.329 fused_ordering(452) 00:18:42.329 fused_ordering(453) 00:18:42.329 fused_ordering(454) 00:18:42.329 fused_ordering(455) 00:18:42.329 fused_ordering(456) 00:18:42.329 fused_ordering(457) 00:18:42.329 fused_ordering(458) 00:18:42.329 fused_ordering(459) 00:18:42.329 fused_ordering(460) 00:18:42.329 fused_ordering(461) 00:18:42.329 fused_ordering(462) 00:18:42.329 fused_ordering(463) 00:18:42.329 fused_ordering(464) 00:18:42.329 fused_ordering(465) 00:18:42.329 fused_ordering(466) 00:18:42.329 fused_ordering(467) 00:18:42.329 fused_ordering(468) 00:18:42.329 fused_ordering(469) 00:18:42.329 fused_ordering(470) 00:18:42.330 fused_ordering(471) 00:18:42.330 fused_ordering(472) 00:18:42.330 fused_ordering(473) 00:18:42.330 fused_ordering(474) 00:18:42.330 fused_ordering(475) 00:18:42.330 fused_ordering(476) 00:18:42.330 fused_ordering(477) 00:18:42.330 fused_ordering(478) 00:18:42.330 fused_ordering(479) 00:18:42.330 fused_ordering(480) 00:18:42.330 fused_ordering(481) 00:18:42.330 fused_ordering(482) 00:18:42.330 fused_ordering(483) 00:18:42.330 fused_ordering(484) 00:18:42.330 fused_ordering(485) 00:18:42.330 fused_ordering(486) 00:18:42.330 fused_ordering(487) 00:18:42.330 fused_ordering(488) 00:18:42.330 fused_ordering(489) 00:18:42.330 fused_ordering(490) 00:18:42.330 fused_ordering(491) 00:18:42.330 fused_ordering(492) 00:18:42.330 fused_ordering(493) 00:18:42.330 fused_ordering(494) 00:18:42.330 fused_ordering(495) 00:18:42.330 fused_ordering(496) 00:18:42.330 fused_ordering(497) 00:18:42.330 fused_ordering(498) 00:18:42.330 fused_ordering(499) 00:18:42.330 fused_ordering(500) 00:18:42.330 fused_ordering(501) 00:18:42.330 fused_ordering(502) 00:18:42.330 fused_ordering(503) 00:18:42.330 fused_ordering(504) 00:18:42.330 fused_ordering(505) 00:18:42.330 fused_ordering(506) 00:18:42.330 fused_ordering(507) 00:18:42.330 fused_ordering(508) 00:18:42.330 fused_ordering(509) 00:18:42.330 fused_ordering(510) 00:18:42.330 fused_ordering(511) 00:18:42.330 fused_ordering(512) 00:18:42.330 fused_ordering(513) 00:18:42.330 fused_ordering(514) 00:18:42.330 fused_ordering(515) 00:18:42.330 fused_ordering(516) 00:18:42.330 fused_ordering(517) 00:18:42.330 fused_ordering(518) 00:18:42.330 fused_ordering(519) 00:18:42.330 fused_ordering(520) 00:18:42.330 fused_ordering(521) 00:18:42.330 fused_ordering(522) 00:18:42.330 fused_ordering(523) 00:18:42.330 fused_ordering(524) 00:18:42.330 fused_ordering(525) 00:18:42.330 fused_ordering(526) 00:18:42.330 fused_ordering(527) 00:18:42.330 fused_ordering(528) 
00:18:42.330 fused_ordering(529) 00:18:42.330 fused_ordering(530) 00:18:42.330 fused_ordering(531) 00:18:42.330 fused_ordering(532) 00:18:42.330 fused_ordering(533) 00:18:42.330 fused_ordering(534) 00:18:42.330 fused_ordering(535) 00:18:42.330 fused_ordering(536) 00:18:42.330 fused_ordering(537) 00:18:42.330 fused_ordering(538) 00:18:42.330 fused_ordering(539) 00:18:42.330 fused_ordering(540) 00:18:42.330 fused_ordering(541) 00:18:42.330 fused_ordering(542) 00:18:42.330 fused_ordering(543) 00:18:42.330 fused_ordering(544) 00:18:42.330 fused_ordering(545) 00:18:42.330 fused_ordering(546) 00:18:42.330 fused_ordering(547) 00:18:42.330 fused_ordering(548) 00:18:42.330 fused_ordering(549) 00:18:42.330 fused_ordering(550) 00:18:42.330 fused_ordering(551) 00:18:42.330 fused_ordering(552) 00:18:42.330 fused_ordering(553) 00:18:42.330 fused_ordering(554) 00:18:42.330 fused_ordering(555) 00:18:42.330 fused_ordering(556) 00:18:42.330 fused_ordering(557) 00:18:42.330 fused_ordering(558) 00:18:42.330 fused_ordering(559) 00:18:42.330 fused_ordering(560) 00:18:42.330 fused_ordering(561) 00:18:42.330 fused_ordering(562) 00:18:42.330 fused_ordering(563) 00:18:42.330 fused_ordering(564) 00:18:42.330 fused_ordering(565) 00:18:42.330 fused_ordering(566) 00:18:42.330 fused_ordering(567) 00:18:42.330 fused_ordering(568) 00:18:42.330 fused_ordering(569) 00:18:42.330 fused_ordering(570) 00:18:42.330 fused_ordering(571) 00:18:42.330 fused_ordering(572) 00:18:42.330 fused_ordering(573) 00:18:42.330 fused_ordering(574) 00:18:42.330 fused_ordering(575) 00:18:42.330 fused_ordering(576) 00:18:42.330 fused_ordering(577) 00:18:42.330 fused_ordering(578) 00:18:42.330 fused_ordering(579) 00:18:42.330 fused_ordering(580) 00:18:42.330 fused_ordering(581) 00:18:42.330 fused_ordering(582) 00:18:42.330 fused_ordering(583) 00:18:42.330 fused_ordering(584) 00:18:42.330 fused_ordering(585) 00:18:42.330 fused_ordering(586) 00:18:42.330 fused_ordering(587) 00:18:42.330 fused_ordering(588) 00:18:42.330 fused_ordering(589) 00:18:42.330 fused_ordering(590) 00:18:42.330 fused_ordering(591) 00:18:42.330 fused_ordering(592) 00:18:42.330 fused_ordering(593) 00:18:42.330 fused_ordering(594) 00:18:42.330 fused_ordering(595) 00:18:42.330 fused_ordering(596) 00:18:42.330 fused_ordering(597) 00:18:42.330 fused_ordering(598) 00:18:42.330 fused_ordering(599) 00:18:42.330 fused_ordering(600) 00:18:42.330 fused_ordering(601) 00:18:42.330 fused_ordering(602) 00:18:42.330 fused_ordering(603) 00:18:42.330 fused_ordering(604) 00:18:42.330 fused_ordering(605) 00:18:42.330 fused_ordering(606) 00:18:42.330 fused_ordering(607) 00:18:42.330 fused_ordering(608) 00:18:42.330 fused_ordering(609) 00:18:42.330 fused_ordering(610) 00:18:42.330 fused_ordering(611) 00:18:42.330 fused_ordering(612) 00:18:42.330 fused_ordering(613) 00:18:42.330 fused_ordering(614) 00:18:42.330 fused_ordering(615) 00:18:42.903 fused_ordering(616) 00:18:42.903 fused_ordering(617) 00:18:42.903 fused_ordering(618) 00:18:42.903 fused_ordering(619) 00:18:42.903 fused_ordering(620) 00:18:42.903 fused_ordering(621) 00:18:42.903 fused_ordering(622) 00:18:42.903 fused_ordering(623) 00:18:42.903 fused_ordering(624) 00:18:42.903 fused_ordering(625) 00:18:42.903 fused_ordering(626) 00:18:42.903 fused_ordering(627) 00:18:42.903 fused_ordering(628) 00:18:42.903 fused_ordering(629) 00:18:42.903 fused_ordering(630) 00:18:42.903 fused_ordering(631) 00:18:42.903 fused_ordering(632) 00:18:42.903 fused_ordering(633) 00:18:42.903 fused_ordering(634) 00:18:42.903 fused_ordering(635) 00:18:42.903 
fused_ordering(636) 00:18:42.903 fused_ordering(637) 00:18:42.903 fused_ordering(638) 00:18:42.903 fused_ordering(639) 00:18:42.903 fused_ordering(640) 00:18:42.903 fused_ordering(641) 00:18:42.903 fused_ordering(642) 00:18:42.903 fused_ordering(643) 00:18:42.903 fused_ordering(644) 00:18:42.903 fused_ordering(645) 00:18:42.903 fused_ordering(646) 00:18:42.903 fused_ordering(647) 00:18:42.903 fused_ordering(648) 00:18:42.903 fused_ordering(649) 00:18:42.903 fused_ordering(650) 00:18:42.903 fused_ordering(651) 00:18:42.903 fused_ordering(652) 00:18:42.903 fused_ordering(653) 00:18:42.903 fused_ordering(654) 00:18:42.903 fused_ordering(655) 00:18:42.903 fused_ordering(656) 00:18:42.903 fused_ordering(657) 00:18:42.903 fused_ordering(658) 00:18:42.903 fused_ordering(659) 00:18:42.903 fused_ordering(660) 00:18:42.903 fused_ordering(661) 00:18:42.903 fused_ordering(662) 00:18:42.903 fused_ordering(663) 00:18:42.903 fused_ordering(664) 00:18:42.903 fused_ordering(665) 00:18:42.903 fused_ordering(666) 00:18:42.903 fused_ordering(667) 00:18:42.903 fused_ordering(668) 00:18:42.903 fused_ordering(669) 00:18:42.903 fused_ordering(670) 00:18:42.903 fused_ordering(671) 00:18:42.903 fused_ordering(672) 00:18:42.903 fused_ordering(673) 00:18:42.903 fused_ordering(674) 00:18:42.903 fused_ordering(675) 00:18:42.903 fused_ordering(676) 00:18:42.903 fused_ordering(677) 00:18:42.903 fused_ordering(678) 00:18:42.903 fused_ordering(679) 00:18:42.903 fused_ordering(680) 00:18:42.903 fused_ordering(681) 00:18:42.903 fused_ordering(682) 00:18:42.903 fused_ordering(683) 00:18:42.903 fused_ordering(684) 00:18:42.903 fused_ordering(685) 00:18:42.903 fused_ordering(686) 00:18:42.903 fused_ordering(687) 00:18:42.903 fused_ordering(688) 00:18:42.903 fused_ordering(689) 00:18:42.903 fused_ordering(690) 00:18:42.903 fused_ordering(691) 00:18:42.903 fused_ordering(692) 00:18:42.903 fused_ordering(693) 00:18:42.903 fused_ordering(694) 00:18:42.903 fused_ordering(695) 00:18:42.903 fused_ordering(696) 00:18:42.903 fused_ordering(697) 00:18:42.903 fused_ordering(698) 00:18:42.903 fused_ordering(699) 00:18:42.903 fused_ordering(700) 00:18:42.903 fused_ordering(701) 00:18:42.903 fused_ordering(702) 00:18:42.903 fused_ordering(703) 00:18:42.903 fused_ordering(704) 00:18:42.903 fused_ordering(705) 00:18:42.903 fused_ordering(706) 00:18:42.903 fused_ordering(707) 00:18:42.903 fused_ordering(708) 00:18:42.903 fused_ordering(709) 00:18:42.903 fused_ordering(710) 00:18:42.903 fused_ordering(711) 00:18:42.903 fused_ordering(712) 00:18:42.903 fused_ordering(713) 00:18:42.903 fused_ordering(714) 00:18:42.903 fused_ordering(715) 00:18:42.903 fused_ordering(716) 00:18:42.903 fused_ordering(717) 00:18:42.903 fused_ordering(718) 00:18:42.903 fused_ordering(719) 00:18:42.903 fused_ordering(720) 00:18:42.903 fused_ordering(721) 00:18:42.903 fused_ordering(722) 00:18:42.903 fused_ordering(723) 00:18:42.903 fused_ordering(724) 00:18:42.903 fused_ordering(725) 00:18:42.903 fused_ordering(726) 00:18:42.903 fused_ordering(727) 00:18:42.903 fused_ordering(728) 00:18:42.903 fused_ordering(729) 00:18:42.903 fused_ordering(730) 00:18:42.903 fused_ordering(731) 00:18:42.903 fused_ordering(732) 00:18:42.903 fused_ordering(733) 00:18:42.903 fused_ordering(734) 00:18:42.903 fused_ordering(735) 00:18:42.903 fused_ordering(736) 00:18:42.903 fused_ordering(737) 00:18:42.903 fused_ordering(738) 00:18:42.903 fused_ordering(739) 00:18:42.903 fused_ordering(740) 00:18:42.903 fused_ordering(741) 00:18:42.903 fused_ordering(742) 00:18:42.903 fused_ordering(743) 
00:18:42.903 fused_ordering(744) 00:18:42.903 fused_ordering(745) 00:18:42.903 fused_ordering(746) 00:18:42.903 fused_ordering(747) 00:18:42.903 fused_ordering(748) 00:18:42.903 fused_ordering(749) 00:18:42.903 fused_ordering(750) 00:18:42.903 fused_ordering(751) 00:18:42.903 fused_ordering(752) 00:18:42.903 fused_ordering(753) 00:18:42.903 fused_ordering(754) 00:18:42.903 fused_ordering(755) 00:18:42.903 fused_ordering(756) 00:18:42.903 fused_ordering(757) 00:18:42.903 fused_ordering(758) 00:18:42.903 fused_ordering(759) 00:18:42.903 fused_ordering(760) 00:18:42.903 fused_ordering(761) 00:18:42.903 fused_ordering(762) 00:18:42.903 fused_ordering(763) 00:18:42.903 fused_ordering(764) 00:18:42.903 fused_ordering(765) 00:18:42.903 fused_ordering(766) 00:18:42.903 fused_ordering(767) 00:18:42.903 fused_ordering(768) 00:18:42.903 fused_ordering(769) 00:18:42.903 fused_ordering(770) 00:18:42.903 fused_ordering(771) 00:18:42.903 fused_ordering(772) 00:18:42.903 fused_ordering(773) 00:18:42.903 fused_ordering(774) 00:18:42.903 fused_ordering(775) 00:18:42.903 fused_ordering(776) 00:18:42.903 fused_ordering(777) 00:18:42.903 fused_ordering(778) 00:18:42.903 fused_ordering(779) 00:18:42.903 fused_ordering(780) 00:18:42.903 fused_ordering(781) 00:18:42.903 fused_ordering(782) 00:18:42.903 fused_ordering(783) 00:18:42.903 fused_ordering(784) 00:18:42.903 fused_ordering(785) 00:18:42.903 fused_ordering(786) 00:18:42.903 fused_ordering(787) 00:18:42.903 fused_ordering(788) 00:18:42.903 fused_ordering(789) 00:18:42.903 fused_ordering(790) 00:18:42.903 fused_ordering(791) 00:18:42.903 fused_ordering(792) 00:18:42.903 fused_ordering(793) 00:18:42.903 fused_ordering(794) 00:18:42.903 fused_ordering(795) 00:18:42.903 fused_ordering(796) 00:18:42.903 fused_ordering(797) 00:18:42.903 fused_ordering(798) 00:18:42.903 fused_ordering(799) 00:18:42.903 fused_ordering(800) 00:18:42.903 fused_ordering(801) 00:18:42.903 fused_ordering(802) 00:18:42.903 fused_ordering(803) 00:18:42.903 fused_ordering(804) 00:18:42.903 fused_ordering(805) 00:18:42.903 fused_ordering(806) 00:18:42.903 fused_ordering(807) 00:18:42.903 fused_ordering(808) 00:18:42.903 fused_ordering(809) 00:18:42.903 fused_ordering(810) 00:18:42.903 fused_ordering(811) 00:18:42.903 fused_ordering(812) 00:18:42.903 fused_ordering(813) 00:18:42.903 fused_ordering(814) 00:18:42.903 fused_ordering(815) 00:18:42.903 fused_ordering(816) 00:18:42.903 fused_ordering(817) 00:18:42.903 fused_ordering(818) 00:18:42.903 fused_ordering(819) 00:18:42.903 fused_ordering(820) 00:18:43.475 fused_ordering(821) 00:18:43.475 fused_ordering(822) 00:18:43.475 fused_ordering(823) 00:18:43.475 fused_ordering(824) 00:18:43.476 fused_ordering(825) 00:18:43.476 fused_ordering(826) 00:18:43.476 fused_ordering(827) 00:18:43.476 fused_ordering(828) 00:18:43.476 fused_ordering(829) 00:18:43.476 fused_ordering(830) 00:18:43.476 fused_ordering(831) 00:18:43.476 fused_ordering(832) 00:18:43.476 fused_ordering(833) 00:18:43.476 fused_ordering(834) 00:18:43.476 fused_ordering(835) 00:18:43.476 fused_ordering(836) 00:18:43.476 fused_ordering(837) 00:18:43.476 fused_ordering(838) 00:18:43.476 fused_ordering(839) 00:18:43.476 fused_ordering(840) 00:18:43.476 fused_ordering(841) 00:18:43.476 fused_ordering(842) 00:18:43.476 fused_ordering(843) 00:18:43.476 fused_ordering(844) 00:18:43.476 fused_ordering(845) 00:18:43.476 fused_ordering(846) 00:18:43.476 fused_ordering(847) 00:18:43.476 fused_ordering(848) 00:18:43.476 fused_ordering(849) 00:18:43.476 fused_ordering(850) 00:18:43.476 
fused_ordering(851) 00:18:43.476 fused_ordering(852) 00:18:43.476 fused_ordering(853) 00:18:43.476 fused_ordering(854) 00:18:43.476 fused_ordering(855) 00:18:43.476 fused_ordering(856) 00:18:43.476 fused_ordering(857) 00:18:43.476 fused_ordering(858) 00:18:43.476 fused_ordering(859) 00:18:43.476 fused_ordering(860) 00:18:43.476 fused_ordering(861) 00:18:43.476 fused_ordering(862) 00:18:43.476 fused_ordering(863) 00:18:43.476 fused_ordering(864) 00:18:43.476 fused_ordering(865) 00:18:43.476 fused_ordering(866) 00:18:43.476 fused_ordering(867) 00:18:43.476 fused_ordering(868) 00:18:43.476 fused_ordering(869) 00:18:43.476 fused_ordering(870) 00:18:43.476 fused_ordering(871) 00:18:43.476 fused_ordering(872) 00:18:43.476 fused_ordering(873) 00:18:43.476 fused_ordering(874) 00:18:43.476 fused_ordering(875) 00:18:43.476 fused_ordering(876) 00:18:43.476 fused_ordering(877) 00:18:43.476 fused_ordering(878) 00:18:43.476 fused_ordering(879) 00:18:43.476 fused_ordering(880) 00:18:43.476 fused_ordering(881) 00:18:43.476 fused_ordering(882) 00:18:43.476 fused_ordering(883) 00:18:43.476 fused_ordering(884) 00:18:43.476 fused_ordering(885) 00:18:43.476 fused_ordering(886) 00:18:43.476 fused_ordering(887) 00:18:43.476 fused_ordering(888) 00:18:43.476 fused_ordering(889) 00:18:43.476 fused_ordering(890) 00:18:43.476 fused_ordering(891) 00:18:43.476 fused_ordering(892) 00:18:43.476 fused_ordering(893) 00:18:43.476 fused_ordering(894) 00:18:43.476 fused_ordering(895) 00:18:43.476 fused_ordering(896) 00:18:43.476 fused_ordering(897) 00:18:43.476 fused_ordering(898) 00:18:43.476 fused_ordering(899) 00:18:43.476 fused_ordering(900) 00:18:43.476 fused_ordering(901) 00:18:43.476 fused_ordering(902) 00:18:43.476 fused_ordering(903) 00:18:43.476 fused_ordering(904) 00:18:43.476 fused_ordering(905) 00:18:43.476 fused_ordering(906) 00:18:43.476 fused_ordering(907) 00:18:43.476 fused_ordering(908) 00:18:43.476 fused_ordering(909) 00:18:43.476 fused_ordering(910) 00:18:43.476 fused_ordering(911) 00:18:43.476 fused_ordering(912) 00:18:43.476 fused_ordering(913) 00:18:43.476 fused_ordering(914) 00:18:43.476 fused_ordering(915) 00:18:43.476 fused_ordering(916) 00:18:43.476 fused_ordering(917) 00:18:43.476 fused_ordering(918) 00:18:43.476 fused_ordering(919) 00:18:43.476 fused_ordering(920) 00:18:43.476 fused_ordering(921) 00:18:43.476 fused_ordering(922) 00:18:43.476 fused_ordering(923) 00:18:43.476 fused_ordering(924) 00:18:43.476 fused_ordering(925) 00:18:43.476 fused_ordering(926) 00:18:43.476 fused_ordering(927) 00:18:43.476 fused_ordering(928) 00:18:43.476 fused_ordering(929) 00:18:43.476 fused_ordering(930) 00:18:43.476 fused_ordering(931) 00:18:43.476 fused_ordering(932) 00:18:43.476 fused_ordering(933) 00:18:43.476 fused_ordering(934) 00:18:43.476 fused_ordering(935) 00:18:43.476 fused_ordering(936) 00:18:43.476 fused_ordering(937) 00:18:43.476 fused_ordering(938) 00:18:43.476 fused_ordering(939) 00:18:43.476 fused_ordering(940) 00:18:43.476 fused_ordering(941) 00:18:43.476 fused_ordering(942) 00:18:43.476 fused_ordering(943) 00:18:43.476 fused_ordering(944) 00:18:43.476 fused_ordering(945) 00:18:43.476 fused_ordering(946) 00:18:43.476 fused_ordering(947) 00:18:43.476 fused_ordering(948) 00:18:43.476 fused_ordering(949) 00:18:43.476 fused_ordering(950) 00:18:43.476 fused_ordering(951) 00:18:43.476 fused_ordering(952) 00:18:43.476 fused_ordering(953) 00:18:43.476 fused_ordering(954) 00:18:43.476 fused_ordering(955) 00:18:43.476 fused_ordering(956) 00:18:43.476 fused_ordering(957) 00:18:43.476 fused_ordering(958) 
00:18:43.476 fused_ordering(959) 00:18:43.476 fused_ordering(960) 00:18:43.476 fused_ordering(961) 00:18:43.476 fused_ordering(962) 00:18:43.476 fused_ordering(963) 00:18:43.476 fused_ordering(964) 00:18:43.476 fused_ordering(965) 00:18:43.476 fused_ordering(966) 00:18:43.476 fused_ordering(967) 00:18:43.476 fused_ordering(968) 00:18:43.476 fused_ordering(969) 00:18:43.476 fused_ordering(970) 00:18:43.476 fused_ordering(971) 00:18:43.476 fused_ordering(972) 00:18:43.476 fused_ordering(973) 00:18:43.476 fused_ordering(974) 00:18:43.476 fused_ordering(975) 00:18:43.476 fused_ordering(976) 00:18:43.476 fused_ordering(977) 00:18:43.476 fused_ordering(978) 00:18:43.476 fused_ordering(979) 00:18:43.476 fused_ordering(980) 00:18:43.476 fused_ordering(981) 00:18:43.476 fused_ordering(982) 00:18:43.476 fused_ordering(983) 00:18:43.476 fused_ordering(984) 00:18:43.476 fused_ordering(985) 00:18:43.476 fused_ordering(986) 00:18:43.476 fused_ordering(987) 00:18:43.476 fused_ordering(988) 00:18:43.476 fused_ordering(989) 00:18:43.476 fused_ordering(990) 00:18:43.476 fused_ordering(991) 00:18:43.476 fused_ordering(992) 00:18:43.476 fused_ordering(993) 00:18:43.476 fused_ordering(994) 00:18:43.476 fused_ordering(995) 00:18:43.476 fused_ordering(996) 00:18:43.476 fused_ordering(997) 00:18:43.476 fused_ordering(998) 00:18:43.476 fused_ordering(999) 00:18:43.476 fused_ordering(1000) 00:18:43.476 fused_ordering(1001) 00:18:43.476 fused_ordering(1002) 00:18:43.476 fused_ordering(1003) 00:18:43.476 fused_ordering(1004) 00:18:43.476 fused_ordering(1005) 00:18:43.476 fused_ordering(1006) 00:18:43.476 fused_ordering(1007) 00:18:43.476 fused_ordering(1008) 00:18:43.476 fused_ordering(1009) 00:18:43.476 fused_ordering(1010) 00:18:43.476 fused_ordering(1011) 00:18:43.476 fused_ordering(1012) 00:18:43.476 fused_ordering(1013) 00:18:43.476 fused_ordering(1014) 00:18:43.476 fused_ordering(1015) 00:18:43.476 fused_ordering(1016) 00:18:43.476 fused_ordering(1017) 00:18:43.476 fused_ordering(1018) 00:18:43.476 fused_ordering(1019) 00:18:43.476 fused_ordering(1020) 00:18:43.476 fused_ordering(1021) 00:18:43.476 fused_ordering(1022) 00:18:43.476 fused_ordering(1023) 00:18:43.476 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:43.476 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:43.476 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.476 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:43.476 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.476 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:43.476 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.476 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.476 rmmod nvme_tcp 00:18:43.476 rmmod nvme_fabrics 00:18:43.476 rmmod nvme_keyring 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:43.737 14:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2764428 ']' 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2764428 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2764428 ']' 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2764428 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2764428 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2764428' 00:18:43.737 killing process with pid 2764428 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2764428 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2764428 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.737 14:12:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:46.284 00:18:46.284 real 0m13.419s 00:18:46.284 user 0m7.093s 00:18:46.284 sys 0m7.214s 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:46.284 ************************************ 00:18:46.284 END TEST nvmf_fused_ordering 00:18:46.284 
************************************ 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.284 ************************************ 00:18:46.284 START TEST nvmf_ns_masking 00:18:46.284 ************************************ 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:46.284 * Looking for test storage... 00:18:46.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:46.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.284 --rc genhtml_branch_coverage=1 00:18:46.284 --rc genhtml_function_coverage=1 00:18:46.284 --rc genhtml_legend=1 00:18:46.284 --rc geninfo_all_blocks=1 00:18:46.284 --rc geninfo_unexecuted_blocks=1 00:18:46.284 00:18:46.284 ' 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:46.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.284 --rc genhtml_branch_coverage=1 00:18:46.284 --rc genhtml_function_coverage=1 00:18:46.284 --rc genhtml_legend=1 00:18:46.284 --rc geninfo_all_blocks=1 00:18:46.284 --rc geninfo_unexecuted_blocks=1 00:18:46.284 00:18:46.284 ' 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:46.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.284 --rc genhtml_branch_coverage=1 00:18:46.284 --rc genhtml_function_coverage=1 00:18:46.284 --rc genhtml_legend=1 00:18:46.284 --rc geninfo_all_blocks=1 00:18:46.284 --rc geninfo_unexecuted_blocks=1 00:18:46.284 00:18:46.284 ' 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:46.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.284 --rc genhtml_branch_coverage=1 00:18:46.284 --rc genhtml_function_coverage=1 00:18:46.284 --rc genhtml_legend=1 00:18:46.284 --rc geninfo_all_blocks=1 00:18:46.284 --rc geninfo_unexecuted_blocks=1 00:18:46.284 00:18:46.284 ' 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.284 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:46.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e69e14d7-f08f-4735-a7c7-19e91a3ee9ba 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=15faab64-abe7-4a1b-ba62-458b478b1008 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=df9213f8-ac9f-4a21-bf43-ba46ac647ca5 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:46.285 14:12:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:54.424 14:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:54.424 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:54.424 14:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:54.424 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:54.424 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
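The block above is nvmf/common.sh resolving the supported NIC PCI IDs (Intel E810, 0x8086:0x159b in this run) to their kernel net devices through sysfs before choosing the TCP test interfaces. A minimal standalone sketch of that lookup, under the assumption that the sysfs layout matches what the trace globs (the real script also handles x722 and Mellanox IDs and works from a cached PCI bus listing), would be:

  # Sketch: find net devices backed by Intel E810 functions, the same idea as the trace above.
  net_devs=()
  for dev in /sys/bus/pci/devices/*; do
      [[ $(cat "$dev/vendor") == 0x8086 && $(cat "$dev/device") == 0x159b ]] || continue
      pci_net_devs=("$dev"/net/*)                 # e.g. .../net/cvl_0_0
      net_devs+=("${pci_net_devs[@]##*/}")        # keep only the interface names
  done
  echo "TCP test interfaces: ${net_devs[*]}"      # cvl_0_0 cvl_0_1 in this run

The two interfaces found here are then split into a target side and an initiator side in the lines that follow: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, and an iptables ACCEPT rule is added for TCP port 4420 before the ping sanity checks.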
00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:54.424 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:54.424 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.425 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.425 14:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:54.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:18:54.425 00:18:54.425 --- 10.0.0.2 ping statistics --- 00:18:54.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.425 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:54.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:18:54.425 00:18:54.425 --- 10.0.0.1 ping statistics --- 00:18:54.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.425 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2769431 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2769431 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2769431 ']' 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.425 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:54.425 [2024-12-06 14:12:42.324108] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:18:54.425 [2024-12-06 14:12:42.324172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.425 [2024-12-06 14:12:42.421695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.425 [2024-12-06 14:12:42.472346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.425 [2024-12-06 14:12:42.472396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.425 [2024-12-06 14:12:42.472405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.425 [2024-12-06 14:12:42.472413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.425 [2024-12-06 14:12:42.472419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
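At this point nvmf_tgt has been launched inside the cvl_0_0_ns_spdk namespace (the reactor-started notice follows just below) and the rest of the test drives it through rpc.py plus the plain nvme CLI. Condensed from the trace below, with the long workspace path shortened to rpc.py purely for readability (a recap under that abbreviation, not an exact transcript), the namespace-masking flow is:

  # target side: transport, bdevs, subsystem, listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # masking: re-add namespace 1 hidden, then grant/revoke visibility per host NQN
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

  # host side: connect as host1 and check what that host is allowed to see
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
               -q nqn.2016-06.io.spdk:host1 -i 4
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all-zero NGUID in this trace => hidden from this host

The expected-failure cases are wrapped in the NOT helper, e.g. nvmf_ns_remove_host against namespace 2, which was added auto-visible and therefore returns the -32602 "Invalid parameters" JSON-RPC error seen further down.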
00:18:54.425 [2024-12-06 14:12:42.473161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.686 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.686 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:54.686 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.686 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.686 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:54.686 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.686 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:54.946 [2024-12-06 14:12:43.344950] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.946 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:54.946 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:54.946 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:54.946 Malloc1 00:18:54.946 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:55.206 Malloc2 00:18:55.206 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:55.466 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:55.726 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.726 [2024-12-06 14:12:44.309348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.726 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:55.726 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I df9213f8-ac9f-4a21-bf43-ba46ac647ca5 -a 10.0.0.2 -s 4420 -i 4 00:18:55.985 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:55.985 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:55.985 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.985 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:55.985 
14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:57.912 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:57.912 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:57.912 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:57.912 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:57.912 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.912 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:57.912 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:57.912 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:58.171 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:58.171 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:58.171 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:58.171 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.171 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:58.171 [ 0]:0x1 00:18:58.171 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:58.171 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.171 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=50c707aeca3743c1b97837b5fceed91d 00:18:58.172 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 50c707aeca3743c1b97837b5fceed91d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.172 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:58.431 [ 0]:0x1 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=50c707aeca3743c1b97837b5fceed91d 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 50c707aeca3743c1b97837b5fceed91d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.431 14:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:58.431 [ 1]:0x2 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:58.431 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.432 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673cf4d0195b4a939d945c5c97d4f9a8 00:18:58.432 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673cf4d0195b4a939d945c5c97d4f9a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.432 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:58.432 14:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:58.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:58.692 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:58.953 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:58.953 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:58.953 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I df9213f8-ac9f-4a21-bf43-ba46ac647ca5 -a 10.0.0.2 -s 4420 -i 4 00:18:59.214 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:59.214 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:59.214 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.214 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:59.214 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:59.214 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:01.267 [ 0]:0x2 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=673cf4d0195b4a939d945c5c97d4f9a8 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673cf4d0195b4a939d945c5c97d4f9a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:01.267 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:01.529 [ 0]:0x1 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=50c707aeca3743c1b97837b5fceed91d 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 50c707aeca3743c1b97837b5fceed91d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:01.529 [ 1]:0x2 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:01.529 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673cf4d0195b4a939d945c5c97d4f9a8 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673cf4d0195b4a939d945c5c97d4f9a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.790 14:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:01.790 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:02.050 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:02.050 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:02.050 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:02.050 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.050 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.050 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.050 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:02.050 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:02.050 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:02.050 [ 0]:0x2 00:19:02.051 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:02.051 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:02.051 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673cf4d0195b4a939d945c5c97d4f9a8 00:19:02.051 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673cf4d0195b4a939d945c5c97d4f9a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:02.051 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:02.051 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:02.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.051 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:02.311 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:02.311 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I df9213f8-ac9f-4a21-bf43-ba46ac647ca5 -a 10.0.0.2 -s 4420 -i 4 00:19:02.571 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:02.571 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:02.571 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.571 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:02.571 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:02.571 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:04.488 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:04.488 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:04.488 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.488 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:04.488 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.488 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:04.488 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:04.488 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.488 [ 0]:0x1 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=50c707aeca3743c1b97837b5fceed91d 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 50c707aeca3743c1b97837b5fceed91d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:04.488 [ 1]:0x2 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673cf4d0195b4a939d945c5c97d4f9a8 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673cf4d0195b4a939d945c5c97d4f9a8 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.488 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:04.748 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:04.748 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:04.748 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:04.749 [ 0]:0x2 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:04.749 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673cf4d0195b4a939d945c5c97d4f9a8 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673cf4d0195b4a939d945c5c97d4f9a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.009 14:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:05.009 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:05.009 [2024-12-06 14:12:53.587127] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:05.009 request: 00:19:05.009 { 00:19:05.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.009 "nsid": 2, 00:19:05.009 "host": "nqn.2016-06.io.spdk:host1", 00:19:05.009 "method": "nvmf_ns_remove_host", 00:19:05.009 "req_id": 1 00:19:05.009 } 00:19:05.009 Got JSON-RPC error response 00:19:05.009 response: 00:19:05.009 { 00:19:05.009 "code": -32602, 00:19:05.009 "message": "Invalid parameters" 00:19:05.010 } 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:05.010 14:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:05.010 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:05.268 [ 0]:0x2 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=673cf4d0195b4a939d945c5c97d4f9a8 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 673cf4d0195b4a939d945c5c97d4f9a8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:05.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2771646 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2771646 /var/tmp/host.sock 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2771646 ']' 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:05.268 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.269 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:05.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:05.269 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.269 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:05.269 [2024-12-06 14:12:53.858104] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:19:05.269 [2024-12-06 14:12:53.858155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771646 ] 00:19:05.528 [2024-12-06 14:12:53.946012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.528 [2024-12-06 14:12:53.981668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.098 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.098 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:06.098 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:06.358 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:06.358 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e69e14d7-f08f-4735-a7c7-19e91a3ee9ba 00:19:06.358 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:06.358 14:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E69E14D7F08F4735A7C719E91A3EE9BA -i 00:19:06.620 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 15faab64-abe7-4a1b-ba62-458b478b1008 00:19:06.620 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:06.620 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 15FAAB64ABE74A1BBA62458B478B1008 -i 00:19:06.881 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:06.881 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:07.141 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:07.141 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:07.711 nvme0n1 00:19:07.711 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:07.711 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:07.972 nvme1n2 00:19:07.972 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:07.972 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:07.972 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:07.972 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:07.972 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:07.972 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:07.972 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:07.972 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:07.972 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:08.232 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e69e14d7-f08f-4735-a7c7-19e91a3ee9ba == \e\6\9\e\1\4\d\7\-\f\0\8\f\-\4\7\3\5\-\a\7\c\7\-\1\9\e\9\1\a\3\e\e\9\b\a ]] 00:19:08.233 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:08.233 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:08.233 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:08.493 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
15faab64-abe7-4a1b-ba62-458b478b1008 == \1\5\f\a\a\b\6\4\-\a\b\e\7\-\4\a\1\b\-\b\a\6\2\-\4\5\8\b\4\7\8\b\1\0\0\8 ]] 00:19:08.493 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:08.493 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid e69e14d7-f08f-4735-a7c7-19e91a3ee9ba 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E69E14D7F08F4735A7C719E91A3EE9BA 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E69E14D7F08F4735A7C719E91A3EE9BA 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:08.753 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E69E14D7F08F4735A7C719E91A3EE9BA 00:19:09.013 [2024-12-06 14:12:57.469383] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:09.013 [2024-12-06 14:12:57.469410] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:09.013 [2024-12-06 14:12:57.469417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.013 request: 00:19:09.013 { 00:19:09.013 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.013 "namespace": { 00:19:09.013 "bdev_name": 
"invalid", 00:19:09.013 "nsid": 1, 00:19:09.013 "nguid": "E69E14D7F08F4735A7C719E91A3EE9BA", 00:19:09.013 "no_auto_visible": false, 00:19:09.013 "hide_metadata": false 00:19:09.013 }, 00:19:09.013 "method": "nvmf_subsystem_add_ns", 00:19:09.013 "req_id": 1 00:19:09.013 } 00:19:09.013 Got JSON-RPC error response 00:19:09.013 response: 00:19:09.013 { 00:19:09.013 "code": -32602, 00:19:09.013 "message": "Invalid parameters" 00:19:09.013 } 00:19:09.013 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:09.013 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.013 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.013 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.013 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid e69e14d7-f08f-4735-a7c7-19e91a3ee9ba 00:19:09.013 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:09.013 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E69E14D7F08F4735A7C719E91A3EE9BA -i 00:19:09.274 14:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:11.182 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:11.182 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:11.182 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2771646 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2771646 ']' 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2771646 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2771646 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2771646' 00:19:11.443 killing process with pid 2771646 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2771646 00:19:11.443 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2771646 00:19:11.703 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.703 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:11.703 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:11.703 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.703 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:11.703 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.703 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:11.703 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.703 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.703 rmmod nvme_tcp 00:19:11.963 rmmod nvme_fabrics 00:19:11.963 rmmod nvme_keyring 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2769431 ']' 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2769431 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2769431 ']' 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2769431 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2769431 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2769431' 00:19:11.963 killing process with pid 2769431 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2769431 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2769431 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.963 14:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:14.509 00:19:14.509 real 0m28.191s 00:19:14.509 user 0m32.037s 00:19:14.509 sys 0m8.198s 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:14.509 ************************************ 00:19:14.509 END TEST nvmf_ns_masking 00:19:14.509 ************************************ 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:14.509 ************************************ 00:19:14.509 START TEST nvmf_nvme_cli 00:19:14.509 ************************************ 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:14.509 * Looking for test storage... 
00:19:14.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:14.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.509 --rc genhtml_branch_coverage=1 00:19:14.509 --rc genhtml_function_coverage=1 00:19:14.509 --rc genhtml_legend=1 00:19:14.509 --rc geninfo_all_blocks=1 00:19:14.509 --rc geninfo_unexecuted_blocks=1 00:19:14.509 00:19:14.509 ' 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:14.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.509 --rc genhtml_branch_coverage=1 00:19:14.509 --rc genhtml_function_coverage=1 00:19:14.509 --rc genhtml_legend=1 00:19:14.509 --rc geninfo_all_blocks=1 00:19:14.509 --rc geninfo_unexecuted_blocks=1 00:19:14.509 00:19:14.509 ' 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:14.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.509 --rc genhtml_branch_coverage=1 00:19:14.509 --rc genhtml_function_coverage=1 00:19:14.509 --rc genhtml_legend=1 00:19:14.509 --rc geninfo_all_blocks=1 00:19:14.509 --rc geninfo_unexecuted_blocks=1 00:19:14.509 00:19:14.509 ' 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:14.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.509 --rc genhtml_branch_coverage=1 00:19:14.509 --rc genhtml_function_coverage=1 00:19:14.509 --rc genhtml_legend=1 00:19:14.509 --rc geninfo_all_blocks=1 00:19:14.509 --rc geninfo_unexecuted_blocks=1 00:19:14.509 00:19:14.509 ' 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.509 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:14.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:14.510 14:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.510 14:13:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.510 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:14.510 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:14.510 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:14.510 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.653 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:22.654 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:22.654 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.654 
14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:22.654 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:22.654 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:22.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:19:22.654 00:19:22.654 --- 10.0.0.2 ping statistics --- 00:19:22.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.654 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:19:22.654 00:19:22.654 --- 10.0.0.1 ping statistics --- 00:19:22.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.654 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2777356 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2777356 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2777356 ']' 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.654 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:22.654 [2024-12-06 14:13:10.591172] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:19:22.655 [2024-12-06 14:13:10.591238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.655 [2024-12-06 14:13:10.692564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:22.655 [2024-12-06 14:13:10.749672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.655 [2024-12-06 14:13:10.749730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.655 [2024-12-06 14:13:10.749739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.655 [2024-12-06 14:13:10.749746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.655 [2024-12-06 14:13:10.749752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.655 [2024-12-06 14:13:10.751788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.655 [2024-12-06 14:13:10.751928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.655 [2024-12-06 14:13:10.752092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.655 [2024-12-06 14:13:10.752093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:22.916 [2024-12-06 14:13:11.473540] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:22.916 Malloc0 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:22.916 Malloc1 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:22.916 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.178 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:23.178 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.178 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:23.179 [2024-12-06 14:13:11.584656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:19:23.179 00:19:23.179 Discovery Log Number of Records 2, Generation counter 2 00:19:23.179 =====Discovery Log Entry 0====== 00:19:23.179 trtype: tcp 00:19:23.179 adrfam: ipv4 00:19:23.179 subtype: current discovery subsystem 00:19:23.179 treq: not required 00:19:23.179 portid: 0 00:19:23.179 trsvcid: 4420 00:19:23.179 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:23.179 traddr: 10.0.0.2 00:19:23.179 eflags: explicit discovery connections, duplicate discovery information 00:19:23.179 sectype: none 00:19:23.179 =====Discovery Log Entry 1====== 00:19:23.179 trtype: tcp 00:19:23.179 adrfam: ipv4 00:19:23.179 subtype: nvme subsystem 00:19:23.179 treq: not required 00:19:23.179 portid: 0 00:19:23.179 trsvcid: 4420 00:19:23.179 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:23.179 traddr: 10.0.0.2 00:19:23.179 eflags: none 00:19:23.179 sectype: none 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:23.179 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:25.103 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:25.103 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:25.103 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:25.103 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:25.103 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:25.103 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:27.012 14:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:27.012 /dev/nvme0n2 ]] 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:27.012 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:27.013 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:27.013 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:27.013 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:27.013 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:27.013 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:27.013 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:27.013 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:27.013 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:27.013 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:27.013 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:27.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:27.273 14:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:27.273 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:27.534 rmmod nvme_tcp 00:19:27.534 rmmod nvme_fabrics 00:19:27.534 rmmod nvme_keyring 00:19:27.534 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:27.534 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:27.534 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:27.534 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2777356 ']' 00:19:27.534 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2777356 00:19:27.534 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2777356 ']' 00:19:27.534 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2777356 00:19:27.534 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:27.534 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.534 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2777356 00:19:27.534 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.534 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.534 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2777356' 00:19:27.534 killing process with pid 2777356 00:19:27.534 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2777356 00:19:27.534 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2777356 00:19:27.534 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:27.534 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:27.534 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:27.534 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:27.794 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:27.794 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:27.794 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:27.794 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:27.794 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:27.794 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.794 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.794 14:13:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:29.709 00:19:29.709 real 0m15.497s 00:19:29.709 user 0m23.852s 00:19:29.709 sys 0m6.382s 00:19:29.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:29.709 ************************************ 00:19:29.709 END TEST nvmf_nvme_cli 00:19:29.709 ************************************ 00:19:29.709 14:13:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:29.709 14:13:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:29.709 14:13:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:29.709 14:13:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.709 14:13:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:29.709 ************************************ 00:19:29.709 START TEST nvmf_vfio_user 00:19:29.709 ************************************ 00:19:29.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:19:29.972 * Looking for test storage... 00:19:29.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.972 --rc genhtml_branch_coverage=1 00:19:29.972 --rc genhtml_function_coverage=1 00:19:29.972 --rc genhtml_legend=1 00:19:29.972 --rc geninfo_all_blocks=1 00:19:29.972 --rc geninfo_unexecuted_blocks=1 00:19:29.972 00:19:29.972 ' 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.972 --rc genhtml_branch_coverage=1 00:19:29.972 --rc genhtml_function_coverage=1 00:19:29.972 --rc genhtml_legend=1 00:19:29.972 --rc geninfo_all_blocks=1 00:19:29.972 --rc geninfo_unexecuted_blocks=1 00:19:29.972 00:19:29.972 ' 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.972 --rc genhtml_branch_coverage=1 00:19:29.972 --rc genhtml_function_coverage=1 00:19:29.972 --rc genhtml_legend=1 00:19:29.972 --rc geninfo_all_blocks=1 00:19:29.972 --rc geninfo_unexecuted_blocks=1 00:19:29.972 00:19:29.972 ' 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:29.972 --rc genhtml_branch_coverage=1 00:19:29.972 --rc genhtml_function_coverage=1 00:19:29.972 --rc genhtml_legend=1 00:19:29.972 --rc geninfo_all_blocks=1 00:19:29.972 --rc geninfo_unexecuted_blocks=1 00:19:29.972 00:19:29.972 ' 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.972 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:29.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
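For reference, the host-side flow that the nvmf_nvme_cli test above exercised boils down to the following nvme-cli sequence (a recap sketched from the commands traced earlier in this log; the hostnqn/hostid pair and the 10.0.0.2:4420 listener are the ones generated there, and the retry loop below stands in for the test's waitforserial helper, which polls the same lsblk/grep pipeline):

  # discover, connect, wait for both namespaces, then disconnect (as traced above)
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 2 ]; do sleep 2; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1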
00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2778863 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2778863' 00:19:29.973 Process pid: 2778863 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2778863 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2778863 ']' 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.973 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:30.234 [2024-12-06 14:13:18.638387] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:19:30.234 [2024-12-06 14:13:18.638463] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.234 [2024-12-06 14:13:18.729094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.234 [2024-12-06 14:13:18.769822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.234 [2024-12-06 14:13:18.769860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:30.234 [2024-12-06 14:13:18.769866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.234 [2024-12-06 14:13:18.769872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.234 [2024-12-06 14:13:18.769876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.234 [2024-12-06 14:13:18.771322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.234 [2024-12-06 14:13:18.771523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.234 [2024-12-06 14:13:18.771587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.234 [2024-12-06 14:13:18.771587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.806 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.806 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:30.806 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:32.194 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:32.194 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:32.194 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:32.194 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:32.194 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:32.194 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:32.194 Malloc1 00:19:32.456 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:32.456 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:32.716 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:32.977 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:32.977 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:32.977 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:32.977 Malloc2 00:19:32.977 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
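The vfio-user target setup traced here follows the same per-device pattern; condensed from the rpc.py calls recorded for device 1 at nvmf_vfio_user.sh@64-74 (device 2 repeats them with Malloc2, cnode2 and SPDK2, as the lines that follow show; rpc.py is scripts/rpc.py from the SPDK tree):

  # vfio-user transport plus one malloc-backed subsystem per socket directory
  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Each controller is then probed with spdk_nvme_identify using a VFIOUSER transport ID (trtype:VFIOUSER, traddr set to the socket directory, subnqn set to the cnode), which produces the controller-initialization debug stream and identify dump recorded below.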
00:19:33.238 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:33.500 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:33.764 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:33.764 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:33.764 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:33.764 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:33.764 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:33.764 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:33.764 [2024-12-06 14:13:22.162973] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:19:33.764 [2024-12-06 14:13:22.162998] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779673 ] 00:19:33.764 [2024-12-06 14:13:22.197769] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:33.764 [2024-12-06 14:13:22.203028] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:33.764 [2024-12-06 14:13:22.203046] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f707bcb3000 00:19:33.764 [2024-12-06 14:13:22.204031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:33.764 [2024-12-06 14:13:22.205025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:33.764 [2024-12-06 14:13:22.206036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:33.764 [2024-12-06 14:13:22.207045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:33.764 [2024-12-06 14:13:22.208054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:33.764 [2024-12-06 14:13:22.209050] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:33.764 [2024-12-06 14:13:22.210058] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:19:33.764 [2024-12-06 14:13:22.211064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:33.764 [2024-12-06 14:13:22.212069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:33.764 [2024-12-06 14:13:22.212076] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f707bca8000 00:19:33.764 [2024-12-06 14:13:22.212989] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:33.764 [2024-12-06 14:13:22.226449] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:33.764 [2024-12-06 14:13:22.226474] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:19:33.764 [2024-12-06 14:13:22.229177] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:33.764 [2024-12-06 14:13:22.229212] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:33.764 [2024-12-06 14:13:22.229277] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:19:33.764 [2024-12-06 14:13:22.229287] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:19:33.764 [2024-12-06 14:13:22.229291] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:19:33.764 [2024-12-06 14:13:22.230181] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:33.764 [2024-12-06 14:13:22.230188] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:19:33.764 [2024-12-06 14:13:22.230194] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:19:33.764 [2024-12-06 14:13:22.231190] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:33.764 [2024-12-06 14:13:22.231197] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:19:33.764 [2024-12-06 14:13:22.231202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:33.764 [2024-12-06 14:13:22.232192] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:33.764 [2024-12-06 14:13:22.232200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:33.764 [2024-12-06 14:13:22.233195] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:19:33.764 [2024-12-06 14:13:22.233202] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:33.764 [2024-12-06 14:13:22.233206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:33.764 [2024-12-06 14:13:22.233210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:33.764 [2024-12-06 14:13:22.233316] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:19:33.764 [2024-12-06 14:13:22.233320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:33.764 [2024-12-06 14:13:22.233323] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:33.764 [2024-12-06 14:13:22.234202] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:33.764 [2024-12-06 14:13:22.235207] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:33.765 [2024-12-06 14:13:22.236210] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:33.765 [2024-12-06 14:13:22.237211] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:33.765 [2024-12-06 14:13:22.237280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:33.765 [2024-12-06 14:13:22.238224] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:33.765 [2024-12-06 14:13:22.238230] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:33.765 [2024-12-06 14:13:22.238234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238253] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:19:33.765 [2024-12-06 14:13:22.238260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238273] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:33.765 [2024-12-06 14:13:22.238276] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:33.765 [2024-12-06 14:13:22.238281] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:33.765 [2024-12-06 14:13:22.238291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238332] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:19:33.765 [2024-12-06 14:13:22.238337] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:19:33.765 [2024-12-06 14:13:22.238340] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:19:33.765 [2024-12-06 14:13:22.238343] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:33.765 [2024-12-06 14:13:22.238347] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:19:33.765 [2024-12-06 14:13:22.238350] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:19:33.765 [2024-12-06 14:13:22.238354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.765 [2024-12-06 14:13:22.238392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.765 [2024-12-06 14:13:22.238398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.765 [2024-12-06 14:13:22.238403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.765 [2024-12-06 14:13:22.238407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238432] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:19:33.765 
[2024-12-06 14:13:22.238435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238521] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:33.765 [2024-12-06 14:13:22.238524] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:33.765 [2024-12-06 14:13:22.238527] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:33.765 [2024-12-06 14:13:22.238531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238547] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:19:33.765 [2024-12-06 14:13:22.238559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238570] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:33.765 [2024-12-06 14:13:22.238573] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:33.765 [2024-12-06 14:13:22.238575] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:33.765 [2024-12-06 14:13:22.238579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238614] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:33.765 [2024-12-06 14:13:22.238617] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:33.765 [2024-12-06 14:13:22.238619] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:33.765 [2024-12-06 14:13:22.238624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238667] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:33.765 [2024-12-06 14:13:22.238670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:19:33.765 [2024-12-06 14:13:22.238674] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:19:33.765 [2024-12-06 14:13:22.238688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:33.765 [2024-12-06 14:13:22.238746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:33.765 [2024-12-06 14:13:22.238756] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:33.765 [2024-12-06 14:13:22.238759] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:33.765 [2024-12-06 14:13:22.238761] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:33.766 [2024-12-06 14:13:22.238764] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:33.766 [2024-12-06 14:13:22.238766] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:33.766 [2024-12-06 14:13:22.238771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:33.766 [2024-12-06 14:13:22.238776] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:33.766 [2024-12-06 14:13:22.238779] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:33.766 [2024-12-06 14:13:22.238782] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:33.766 [2024-12-06 14:13:22.238786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:33.766 [2024-12-06 14:13:22.238791] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:33.766 [2024-12-06 14:13:22.238794] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:33.766 [2024-12-06 14:13:22.238797] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:33.766 [2024-12-06 14:13:22.238802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:33.766 [2024-12-06 14:13:22.238808] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:33.766 [2024-12-06 14:13:22.238811] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:33.766 [2024-12-06 14:13:22.238813] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:33.766 [2024-12-06 14:13:22.238817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:33.766 [2024-12-06 14:13:22.238822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:33.766 [2024-12-06 14:13:22.238831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:19:33.766 [2024-12-06 14:13:22.238838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:33.766 [2024-12-06 14:13:22.238843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:33.766 ===================================================== 00:19:33.766 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:33.766 ===================================================== 00:19:33.766 Controller Capabilities/Features 00:19:33.766 ================================ 00:19:33.766 Vendor ID: 4e58 00:19:33.766 Subsystem Vendor ID: 4e58 00:19:33.766 Serial Number: SPDK1 00:19:33.766 Model Number: SPDK bdev Controller 00:19:33.766 Firmware Version: 25.01 00:19:33.766 Recommended Arb Burst: 6 00:19:33.766 IEEE OUI Identifier: 8d 6b 50 00:19:33.766 Multi-path I/O 00:19:33.766 May have multiple subsystem ports: Yes 00:19:33.766 May have multiple controllers: Yes 00:19:33.766 Associated with SR-IOV VF: No 00:19:33.766 Max Data Transfer Size: 131072 00:19:33.766 Max Number of Namespaces: 32 00:19:33.766 Max Number of I/O Queues: 127 00:19:33.766 NVMe Specification Version (VS): 1.3 00:19:33.766 NVMe Specification Version (Identify): 1.3 00:19:33.766 Maximum Queue Entries: 256 00:19:33.766 Contiguous Queues Required: Yes 00:19:33.766 Arbitration Mechanisms Supported 00:19:33.766 Weighted Round Robin: Not Supported 00:19:33.766 Vendor Specific: Not Supported 00:19:33.766 Reset Timeout: 15000 ms 00:19:33.766 Doorbell Stride: 4 bytes 00:19:33.766 NVM Subsystem Reset: Not Supported 00:19:33.766 Command Sets Supported 00:19:33.766 NVM Command Set: Supported 00:19:33.766 Boot Partition: Not Supported 00:19:33.766 Memory Page Size Minimum: 4096 bytes 00:19:33.766 Memory Page Size Maximum: 4096 bytes 00:19:33.766 Persistent Memory Region: Not Supported 00:19:33.766 Optional Asynchronous Events Supported 00:19:33.766 Namespace Attribute Notices: Supported 00:19:33.766 Firmware Activation Notices: Not Supported 00:19:33.766 ANA Change Notices: Not Supported 00:19:33.766 PLE Aggregate Log Change Notices: Not Supported 00:19:33.766 LBA Status Info Alert Notices: Not Supported 00:19:33.766 EGE Aggregate Log Change Notices: Not Supported 00:19:33.766 Normal NVM Subsystem Shutdown event: Not Supported 00:19:33.766 Zone Descriptor Change Notices: Not Supported 00:19:33.766 Discovery Log Change Notices: Not Supported 00:19:33.766 Controller Attributes 00:19:33.766 128-bit Host Identifier: Supported 00:19:33.766 Non-Operational Permissive Mode: Not Supported 00:19:33.766 NVM Sets: Not Supported 00:19:33.766 Read Recovery Levels: Not Supported 00:19:33.766 Endurance Groups: Not Supported 00:19:33.766 Predictable Latency Mode: Not Supported 00:19:33.766 Traffic Based Keep ALive: Not Supported 00:19:33.766 Namespace Granularity: Not Supported 00:19:33.766 SQ Associations: Not Supported 00:19:33.766 UUID List: Not Supported 00:19:33.766 Multi-Domain Subsystem: Not Supported 00:19:33.766 Fixed Capacity Management: Not Supported 00:19:33.766 Variable Capacity Management: Not Supported 00:19:33.766 Delete Endurance Group: Not Supported 00:19:33.766 Delete NVM Set: Not Supported 00:19:33.766 Extended LBA Formats Supported: Not Supported 00:19:33.766 Flexible Data Placement Supported: Not Supported 00:19:33.766 00:19:33.766 Controller Memory Buffer Support 00:19:33.766 ================================ 00:19:33.766 
Supported: No 00:19:33.766 00:19:33.766 Persistent Memory Region Support 00:19:33.766 ================================ 00:19:33.766 Supported: No 00:19:33.766 00:19:33.766 Admin Command Set Attributes 00:19:33.766 ============================ 00:19:33.766 Security Send/Receive: Not Supported 00:19:33.766 Format NVM: Not Supported 00:19:33.766 Firmware Activate/Download: Not Supported 00:19:33.766 Namespace Management: Not Supported 00:19:33.766 Device Self-Test: Not Supported 00:19:33.766 Directives: Not Supported 00:19:33.766 NVMe-MI: Not Supported 00:19:33.766 Virtualization Management: Not Supported 00:19:33.766 Doorbell Buffer Config: Not Supported 00:19:33.766 Get LBA Status Capability: Not Supported 00:19:33.766 Command & Feature Lockdown Capability: Not Supported 00:19:33.766 Abort Command Limit: 4 00:19:33.766 Async Event Request Limit: 4 00:19:33.766 Number of Firmware Slots: N/A 00:19:33.766 Firmware Slot 1 Read-Only: N/A 00:19:33.766 Firmware Activation Without Reset: N/A 00:19:33.766 Multiple Update Detection Support: N/A 00:19:33.766 Firmware Update Granularity: No Information Provided 00:19:33.766 Per-Namespace SMART Log: No 00:19:33.766 Asymmetric Namespace Access Log Page: Not Supported 00:19:33.766 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:33.766 Command Effects Log Page: Supported 00:19:33.766 Get Log Page Extended Data: Supported 00:19:33.766 Telemetry Log Pages: Not Supported 00:19:33.766 Persistent Event Log Pages: Not Supported 00:19:33.766 Supported Log Pages Log Page: May Support 00:19:33.766 Commands Supported & Effects Log Page: Not Supported 00:19:33.766 Feature Identifiers & Effects Log Page:May Support 00:19:33.766 NVMe-MI Commands & Effects Log Page: May Support 00:19:33.766 Data Area 4 for Telemetry Log: Not Supported 00:19:33.766 Error Log Page Entries Supported: 128 00:19:33.766 Keep Alive: Supported 00:19:33.766 Keep Alive Granularity: 10000 ms 00:19:33.766 00:19:33.766 NVM Command Set Attributes 00:19:33.766 ========================== 00:19:33.766 Submission Queue Entry Size 00:19:33.766 Max: 64 00:19:33.766 Min: 64 00:19:33.766 Completion Queue Entry Size 00:19:33.766 Max: 16 00:19:33.766 Min: 16 00:19:33.766 Number of Namespaces: 32 00:19:33.766 Compare Command: Supported 00:19:33.766 Write Uncorrectable Command: Not Supported 00:19:33.766 Dataset Management Command: Supported 00:19:33.766 Write Zeroes Command: Supported 00:19:33.766 Set Features Save Field: Not Supported 00:19:33.766 Reservations: Not Supported 00:19:33.766 Timestamp: Not Supported 00:19:33.766 Copy: Supported 00:19:33.766 Volatile Write Cache: Present 00:19:33.766 Atomic Write Unit (Normal): 1 00:19:33.766 Atomic Write Unit (PFail): 1 00:19:33.766 Atomic Compare & Write Unit: 1 00:19:33.766 Fused Compare & Write: Supported 00:19:33.766 Scatter-Gather List 00:19:33.766 SGL Command Set: Supported (Dword aligned) 00:19:33.766 SGL Keyed: Not Supported 00:19:33.766 SGL Bit Bucket Descriptor: Not Supported 00:19:33.766 SGL Metadata Pointer: Not Supported 00:19:33.766 Oversized SGL: Not Supported 00:19:33.766 SGL Metadata Address: Not Supported 00:19:33.766 SGL Offset: Not Supported 00:19:33.766 Transport SGL Data Block: Not Supported 00:19:33.766 Replay Protected Memory Block: Not Supported 00:19:33.766 00:19:33.766 Firmware Slot Information 00:19:33.766 ========================= 00:19:33.766 Active slot: 1 00:19:33.766 Slot 1 Firmware Revision: 25.01 00:19:33.766 00:19:33.766 00:19:33.766 Commands Supported and Effects 00:19:33.766 ============================== 00:19:33.766 Admin 
Commands 00:19:33.766 -------------- 00:19:33.766 Get Log Page (02h): Supported 00:19:33.766 Identify (06h): Supported 00:19:33.766 Abort (08h): Supported 00:19:33.766 Set Features (09h): Supported 00:19:33.767 Get Features (0Ah): Supported 00:19:33.767 Asynchronous Event Request (0Ch): Supported 00:19:33.767 Keep Alive (18h): Supported 00:19:33.767 I/O Commands 00:19:33.767 ------------ 00:19:33.767 Flush (00h): Supported LBA-Change 00:19:33.767 Write (01h): Supported LBA-Change 00:19:33.767 Read (02h): Supported 00:19:33.767 Compare (05h): Supported 00:19:33.767 Write Zeroes (08h): Supported LBA-Change 00:19:33.767 Dataset Management (09h): Supported LBA-Change 00:19:33.767 Copy (19h): Supported LBA-Change 00:19:33.767 00:19:33.767 Error Log 00:19:33.767 ========= 00:19:33.767 00:19:33.767 Arbitration 00:19:33.767 =========== 00:19:33.767 Arbitration Burst: 1 00:19:33.767 00:19:33.767 Power Management 00:19:33.767 ================ 00:19:33.767 Number of Power States: 1 00:19:33.767 Current Power State: Power State #0 00:19:33.767 Power State #0: 00:19:33.767 Max Power: 0.00 W 00:19:33.767 Non-Operational State: Operational 00:19:33.767 Entry Latency: Not Reported 00:19:33.767 Exit Latency: Not Reported 00:19:33.767 Relative Read Throughput: 0 00:19:33.767 Relative Read Latency: 0 00:19:33.767 Relative Write Throughput: 0 00:19:33.767 Relative Write Latency: 0 00:19:33.767 Idle Power: Not Reported 00:19:33.767 Active Power: Not Reported 00:19:33.767 Non-Operational Permissive Mode: Not Supported 00:19:33.767 00:19:33.767 Health Information 00:19:33.767 ================== 00:19:33.767 Critical Warnings: 00:19:33.767 Available Spare Space: OK 00:19:33.767 Temperature: OK 00:19:33.767 Device Reliability: OK 00:19:33.767 Read Only: No 00:19:33.767 Volatile Memory Backup: OK 00:19:33.767 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:33.767 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:33.767 Available Spare: 0% 00:19:33.767 Available Sp[2024-12-06 14:13:22.238913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:33.767 [2024-12-06 14:13:22.238919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:33.767 [2024-12-06 14:13:22.238939] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:19:33.767 [2024-12-06 14:13:22.238946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.767 [2024-12-06 14:13:22.238951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.767 [2024-12-06 14:13:22.238955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.767 [2024-12-06 14:13:22.238960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.767 [2024-12-06 14:13:22.241460] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:33.767 [2024-12-06 14:13:22.241468] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:33.767 [2024-12-06 14:13:22.242250] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:33.767 [2024-12-06 14:13:22.242289] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:19:33.767 [2024-12-06 14:13:22.242294] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:19:33.767 [2024-12-06 14:13:22.243256] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:33.767 [2024-12-06 14:13:22.243265] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:19:33.767 [2024-12-06 14:13:22.243319] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:33.767 [2024-12-06 14:13:22.244276] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:33.767 are Threshold: 0% 00:19:33.767 Life Percentage Used: 0% 00:19:33.767 Data Units Read: 0 00:19:33.767 Data Units Written: 0 00:19:33.767 Host Read Commands: 0 00:19:33.767 Host Write Commands: 0 00:19:33.767 Controller Busy Time: 0 minutes 00:19:33.767 Power Cycles: 0 00:19:33.767 Power On Hours: 0 hours 00:19:33.767 Unsafe Shutdowns: 0 00:19:33.767 Unrecoverable Media Errors: 0 00:19:33.767 Lifetime Error Log Entries: 0 00:19:33.767 Warning Temperature Time: 0 minutes 00:19:33.767 Critical Temperature Time: 0 minutes 00:19:33.767 00:19:33.767 Number of Queues 00:19:33.767 ================ 00:19:33.767 Number of I/O Submission Queues: 127 00:19:33.767 Number of I/O Completion Queues: 127 00:19:33.767 00:19:33.767 Active Namespaces 00:19:33.767 ================= 00:19:33.767 Namespace ID:1 00:19:33.767 Error Recovery Timeout: Unlimited 00:19:33.767 Command Set Identifier: NVM (00h) 00:19:33.767 Deallocate: Supported 00:19:33.767 Deallocated/Unwritten Error: Not Supported 00:19:33.767 Deallocated Read Value: Unknown 00:19:33.767 Deallocate in Write Zeroes: Not Supported 00:19:33.767 Deallocated Guard Field: 0xFFFF 00:19:33.767 Flush: Supported 00:19:33.767 Reservation: Supported 00:19:33.767 Namespace Sharing Capabilities: Multiple Controllers 00:19:33.767 Size (in LBAs): 131072 (0GiB) 00:19:33.767 Capacity (in LBAs): 131072 (0GiB) 00:19:33.767 Utilization (in LBAs): 131072 (0GiB) 00:19:33.767 NGUID: C2F50CAF9F7A4E848854C44141A89713 00:19:33.767 UUID: c2f50caf-9f7a-4e84-8854-c44141a89713 00:19:33.767 Thin Provisioning: Not Supported 00:19:33.767 Per-NS Atomic Units: Yes 00:19:33.767 Atomic Boundary Size (Normal): 0 00:19:33.767 Atomic Boundary Size (PFail): 0 00:19:33.767 Atomic Boundary Offset: 0 00:19:33.767 Maximum Single Source Range Length: 65535 00:19:33.767 Maximum Copy Length: 65535 00:19:33.767 Maximum Source Range Count: 1 00:19:33.767 NGUID/EUI64 Never Reused: No 00:19:33.767 Namespace Write Protected: No 00:19:33.767 Number of LBA Formats: 1 00:19:33.767 Current LBA Format: LBA Format #00 00:19:33.767 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:33.767 00:19:33.767 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
00:19:34.035 [2024-12-06 14:13:22.434164] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:39.329 Initializing NVMe Controllers 00:19:39.329 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:39.329 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:39.329 Initialization complete. Launching workers. 00:19:39.329 ======================================================== 00:19:39.329 Latency(us) 00:19:39.329 Device Information : IOPS MiB/s Average min max 00:19:39.330 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39967.03 156.12 3202.52 867.84 6861.59 00:19:39.330 ======================================================== 00:19:39.330 Total : 39967.03 156.12 3202.52 867.84 6861.59 00:19:39.330 00:19:39.330 [2024-12-06 14:13:27.454323] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:39.330 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:39.330 [2024-12-06 14:13:27.643160] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:44.635 Initializing NVMe Controllers 00:19:44.635 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:44.635 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:44.635 Initialization complete. Launching workers. 
00:19:44.635 ======================================================== 00:19:44.635 Latency(us) 00:19:44.635 Device Information : IOPS MiB/s Average min max 00:19:44.635 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16047.61 62.69 7975.76 4988.11 9975.35 00:19:44.635 ======================================================== 00:19:44.635 Total : 16047.61 62.69 7975.76 4988.11 9975.35 00:19:44.635 00:19:44.635 [2024-12-06 14:13:32.673507] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:44.635 14:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:44.635 [2024-12-06 14:13:32.883429] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:49.927 [2024-12-06 14:13:37.960639] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:49.927 Initializing NVMe Controllers 00:19:49.927 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:49.927 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:49.927 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:49.927 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:49.927 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:49.927 Initialization complete. Launching workers. 00:19:49.927 Starting thread on core 2 00:19:49.927 Starting thread on core 3 00:19:49.927 Starting thread on core 1 00:19:49.927 14:13:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:49.927 [2024-12-06 14:13:38.219823] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:53.233 [2024-12-06 14:13:41.289718] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:53.233 Initializing NVMe Controllers 00:19:53.233 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:53.233 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:53.233 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:53.233 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:53.233 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:53.233 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:53.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:53.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:53.233 Initialization complete. Launching workers. 
00:19:53.233 Starting thread on core 1 with urgent priority queue 00:19:53.233 Starting thread on core 2 with urgent priority queue 00:19:53.233 Starting thread on core 3 with urgent priority queue 00:19:53.233 Starting thread on core 0 with urgent priority queue 00:19:53.233 SPDK bdev Controller (SPDK1 ) core 0: 12375.33 IO/s 8.08 secs/100000 ios 00:19:53.233 SPDK bdev Controller (SPDK1 ) core 1: 8227.33 IO/s 12.15 secs/100000 ios 00:19:53.233 SPDK bdev Controller (SPDK1 ) core 2: 9427.33 IO/s 10.61 secs/100000 ios 00:19:53.233 SPDK bdev Controller (SPDK1 ) core 3: 8171.67 IO/s 12.24 secs/100000 ios 00:19:53.233 ======================================================== 00:19:53.233 00:19:53.233 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:53.233 [2024-12-06 14:13:41.528861] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:53.233 Initializing NVMe Controllers 00:19:53.233 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:53.233 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:53.233 Namespace ID: 1 size: 0GB 00:19:53.233 Initialization complete. 00:19:53.233 INFO: using host memory buffer for IO 00:19:53.233 Hello world! 00:19:53.233 [2024-12-06 14:13:41.563097] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:53.233 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:53.233 [2024-12-06 14:13:41.796879] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:54.175 Initializing NVMe Controllers 00:19:54.175 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.175 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.175 Initialization complete. Launching workers. 
00:19:54.175 submit (in ns) avg, min, max = 7591.4, 2850.0, 3999012.5 00:19:54.175 complete (in ns) avg, min, max = 17649.9, 1625.8, 4032625.0 00:19:54.175 00:19:54.175 Submit histogram 00:19:54.175 ================ 00:19:54.175 Range in us Cumulative Count 00:19:54.175 2.840 - 2.853: 0.0706% ( 14) 00:19:54.175 2.853 - 2.867: 0.6857% ( 122) 00:19:54.175 2.867 - 2.880: 2.3697% ( 334) 00:19:54.175 2.880 - 2.893: 7.1241% ( 943) 00:19:54.175 2.893 - 2.907: 12.2366% ( 1014) 00:19:54.175 2.907 - 2.920: 18.0498% ( 1153) 00:19:54.175 2.920 - 2.933: 23.2984% ( 1041) 00:19:54.175 2.933 - 2.947: 28.5822% ( 1048) 00:19:54.175 2.947 - 2.960: 34.6425% ( 1202) 00:19:54.175 2.960 - 2.973: 40.2239% ( 1107) 00:19:54.175 2.973 - 2.987: 46.0018% ( 1146) 00:19:54.175 2.987 - 3.000: 52.3747% ( 1264) 00:19:54.175 3.000 - 3.013: 60.2198% ( 1556) 00:19:54.175 3.013 - 3.027: 69.3557% ( 1812) 00:19:54.175 3.027 - 3.040: 77.9873% ( 1712) 00:19:54.175 3.040 - 3.053: 85.3786% ( 1466) 00:19:54.175 3.053 - 3.067: 91.0759% ( 1130) 00:19:54.175 3.067 - 3.080: 94.9934% ( 777) 00:19:54.175 3.080 - 3.093: 97.1312% ( 424) 00:19:54.175 3.093 - 3.107: 98.3362% ( 239) 00:19:54.175 3.107 - 3.120: 98.8454% ( 101) 00:19:54.175 3.120 - 3.133: 99.1832% ( 67) 00:19:54.175 3.133 - 3.147: 99.3345% ( 30) 00:19:54.175 3.147 - 3.160: 99.4353% ( 20) 00:19:54.175 3.160 - 3.173: 99.5009% ( 13) 00:19:54.175 3.173 - 3.187: 99.5109% ( 2) 00:19:54.175 3.187 - 3.200: 99.5362% ( 5) 00:19:54.175 3.200 - 3.213: 99.5412% ( 1) 00:19:54.175 3.213 - 3.227: 99.5462% ( 1) 00:19:54.175 3.227 - 3.240: 99.5513% ( 1) 00:19:54.175 3.280 - 3.293: 99.5563% ( 1) 00:19:54.175 3.387 - 3.400: 99.5614% ( 1) 00:19:54.175 3.600 - 3.627: 99.5664% ( 1) 00:19:54.175 3.733 - 3.760: 99.5714% ( 1) 00:19:54.175 3.787 - 3.813: 99.5765% ( 1) 00:19:54.175 3.813 - 3.840: 99.5815% ( 1) 00:19:54.175 3.893 - 3.920: 99.5866% ( 1) 00:19:54.175 3.973 - 4.000: 99.5916% ( 1) 00:19:54.175 4.133 - 4.160: 99.5967% ( 1) 00:19:54.175 4.160 - 4.187: 99.6017% ( 1) 00:19:54.175 4.347 - 4.373: 99.6067% ( 1) 00:19:54.175 4.427 - 4.453: 99.6118% ( 1) 00:19:54.175 4.640 - 4.667: 99.6219% ( 2) 00:19:54.175 4.693 - 4.720: 99.6269% ( 1) 00:19:54.175 4.827 - 4.853: 99.6370% ( 2) 00:19:54.175 4.907 - 4.933: 99.6420% ( 1) 00:19:54.175 4.933 - 4.960: 99.6521% ( 2) 00:19:54.175 4.960 - 4.987: 99.6622% ( 2) 00:19:54.175 4.987 - 5.013: 99.6672% ( 1) 00:19:54.175 5.013 - 5.040: 99.6773% ( 2) 00:19:54.175 5.067 - 5.093: 99.6824% ( 1) 00:19:54.175 5.120 - 5.147: 99.6874% ( 1) 00:19:54.175 5.173 - 5.200: 99.6924% ( 1) 00:19:54.175 5.227 - 5.253: 99.6975% ( 1) 00:19:54.175 5.253 - 5.280: 99.7025% ( 1) 00:19:54.175 5.333 - 5.360: 99.7076% ( 1) 00:19:54.175 5.360 - 5.387: 99.7126% ( 1) 00:19:54.175 5.413 - 5.440: 99.7227% ( 2) 00:19:54.175 5.467 - 5.493: 99.7277% ( 1) 00:19:54.175 5.493 - 5.520: 99.7328% ( 1) 00:19:54.175 5.520 - 5.547: 99.7429% ( 2) 00:19:54.175 5.547 - 5.573: 99.7479% ( 1) 00:19:54.175 5.573 - 5.600: 99.7580% ( 2) 00:19:54.175 5.600 - 5.627: 99.7630% ( 1) 00:19:54.175 5.627 - 5.653: 99.7681% ( 1) 00:19:54.175 5.680 - 5.707: 99.7782% ( 2) 00:19:54.176 5.733 - 5.760: 99.7832% ( 1) 00:19:54.176 5.760 - 5.787: 99.7933% ( 2) 00:19:54.176 5.813 - 5.840: 99.8034% ( 2) 00:19:54.176 5.867 - 5.893: 99.8084% ( 1) 00:19:54.176 5.893 - 5.920: 99.8185% ( 2) 00:19:54.176 5.973 - 6.000: 99.8235% ( 1) 00:19:54.176 6.000 - 6.027: 99.8286% ( 1) 00:19:54.176 6.080 - 6.107: 99.8336% ( 1) 00:19:54.176 6.107 - 6.133: 99.8437% ( 2) 00:19:54.176 6.133 - 6.160: 99.8538% ( 2) 00:19:54.176 6.160 - 6.187: 99.8588% ( 1) 
00:19:54.176 6.213 - 6.240: 99.8639% ( 1) 00:19:54.176 6.293 - 6.320: 99.8740% ( 2) 00:19:54.437 [2024-12-06 14:13:42.814443] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:54.437 6.720 - 6.747: 99.8790% ( 1) 00:19:54.437 11.680 - 11.733: 99.8840% ( 1) 00:19:54.437 3290.453 - 3304.107: 99.8891% ( 1) 00:19:54.437 3986.773 - 4014.080: 100.0000% ( 22) 00:19:54.437 00:19:54.437 Complete histogram 00:19:54.437 ================== 00:19:54.437 Range in us Cumulative Count 00:19:54.437 1.620 - 1.627: 0.0050% ( 1) 00:19:54.437 1.627 - 1.633: 0.0101% ( 1) 00:19:54.437 1.640 - 1.647: 0.7008% ( 137) 00:19:54.437 1.647 - 1.653: 0.9327% ( 46) 00:19:54.437 1.653 - 1.660: 0.9731% ( 8) 00:19:54.437 1.660 - 1.667: 1.2252% ( 50) 00:19:54.437 1.667 - 1.673: 1.2907% ( 13) 00:19:54.437 1.673 - 1.680: 1.3210% ( 6) 00:19:54.437 1.680 - 1.687: 1.3411% ( 4) 00:19:54.437 1.687 - 1.693: 1.3563% ( 3) 00:19:54.437 1.700 - 1.707: 1.5478% ( 38) 00:19:54.437 1.707 - 1.720: 41.5196% ( 7928) 00:19:54.437 1.720 - 1.733: 67.3238% ( 5118) 00:19:54.437 1.733 - 1.747: 80.3721% ( 2588) 00:19:54.437 1.747 - 1.760: 84.0426% ( 728) 00:19:54.437 1.760 - 1.773: 86.0795% ( 404) 00:19:54.437 1.773 - 1.787: 90.6726% ( 911) 00:19:54.437 1.787 - 1.800: 95.5027% ( 958) 00:19:54.437 1.800 - 1.813: 98.2706% ( 549) 00:19:54.437 1.813 - 1.827: 99.2790% ( 200) 00:19:54.437 1.827 - 1.840: 99.4555% ( 35) 00:19:54.437 1.840 - 1.853: 99.4605% ( 1) 00:19:54.437 3.627 - 3.653: 99.4656% ( 1) 00:19:54.437 3.787 - 3.813: 99.4756% ( 2) 00:19:54.437 3.867 - 3.893: 99.4807% ( 1) 00:19:54.437 3.920 - 3.947: 99.4908% ( 2) 00:19:54.437 4.053 - 4.080: 99.4958% ( 1) 00:19:54.437 4.080 - 4.107: 99.5059% ( 2) 00:19:54.437 4.240 - 4.267: 99.5109% ( 1) 00:19:54.437 4.507 - 4.533: 99.5210% ( 2) 00:19:54.438 4.560 - 4.587: 99.5311% ( 2) 00:19:54.438 4.613 - 4.640: 99.5362% ( 1) 00:19:54.438 4.667 - 4.693: 99.5412% ( 1) 00:19:54.438 4.720 - 4.747: 99.5462% ( 1) 00:19:54.438 4.773 - 4.800: 99.5513% ( 1) 00:19:54.438 4.853 - 4.880: 99.5563% ( 1) 00:19:54.438 4.907 - 4.933: 99.5614% ( 1) 00:19:54.438 4.933 - 4.960: 99.5664% ( 1) 00:19:54.438 5.013 - 5.040: 99.5714% ( 1) 00:19:54.438 5.360 - 5.387: 99.5765% ( 1) 00:19:54.438 6.027 - 6.053: 99.5815% ( 1) 00:19:54.438 7.787 - 7.840: 99.5866% ( 1) 00:19:54.438 9.280 - 9.333: 99.5916% ( 1) 00:19:54.438 10.667 - 10.720: 99.5967% ( 1) 00:19:54.438 10.933 - 10.987: 99.6017% ( 1) 00:19:54.438 3986.773 - 4014.080: 99.9899% ( 77) 00:19:54.438 4014.080 - 4041.387: 100.0000% ( 2) 00:19:54.438 00:19:54.438 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:54.438 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:54.438 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:54.438 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:54.438 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:54.438 [ 00:19:54.438 { 00:19:54.438 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:54.438 "subtype": "Discovery", 00:19:54.438 "listen_addresses": [], 00:19:54.438 "allow_any_host": true, 00:19:54.438 
"hosts": [] 00:19:54.438 }, 00:19:54.438 { 00:19:54.438 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:54.438 "subtype": "NVMe", 00:19:54.438 "listen_addresses": [ 00:19:54.438 { 00:19:54.438 "trtype": "VFIOUSER", 00:19:54.438 "adrfam": "IPv4", 00:19:54.438 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:54.438 "trsvcid": "0" 00:19:54.438 } 00:19:54.438 ], 00:19:54.438 "allow_any_host": true, 00:19:54.438 "hosts": [], 00:19:54.438 "serial_number": "SPDK1", 00:19:54.438 "model_number": "SPDK bdev Controller", 00:19:54.438 "max_namespaces": 32, 00:19:54.438 "min_cntlid": 1, 00:19:54.438 "max_cntlid": 65519, 00:19:54.438 "namespaces": [ 00:19:54.438 { 00:19:54.438 "nsid": 1, 00:19:54.438 "bdev_name": "Malloc1", 00:19:54.438 "name": "Malloc1", 00:19:54.438 "nguid": "C2F50CAF9F7A4E848854C44141A89713", 00:19:54.438 "uuid": "c2f50caf-9f7a-4e84-8854-c44141a89713" 00:19:54.438 } 00:19:54.438 ] 00:19:54.438 }, 00:19:54.438 { 00:19:54.438 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:54.438 "subtype": "NVMe", 00:19:54.438 "listen_addresses": [ 00:19:54.438 { 00:19:54.438 "trtype": "VFIOUSER", 00:19:54.438 "adrfam": "IPv4", 00:19:54.438 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:54.438 "trsvcid": "0" 00:19:54.438 } 00:19:54.438 ], 00:19:54.438 "allow_any_host": true, 00:19:54.438 "hosts": [], 00:19:54.438 "serial_number": "SPDK2", 00:19:54.438 "model_number": "SPDK bdev Controller", 00:19:54.438 "max_namespaces": 32, 00:19:54.438 "min_cntlid": 1, 00:19:54.438 "max_cntlid": 65519, 00:19:54.438 "namespaces": [ 00:19:54.438 { 00:19:54.438 "nsid": 1, 00:19:54.438 "bdev_name": "Malloc2", 00:19:54.438 "name": "Malloc2", 00:19:54.438 "nguid": "D34E6379FE904A31A58BFB34F02B483E", 00:19:54.438 "uuid": "d34e6379-fe90-4a31-a58b-fb34f02b483e" 00:19:54.438 } 00:19:54.438 ] 00:19:54.438 } 00:19:54.438 ] 00:19:54.438 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:54.438 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2783873 00:19:54.438 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:54.438 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:54.438 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:54.438 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:54.438 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:54.438 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:54.438 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:54.438 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:54.699 [2024-12-06 14:13:43.194893] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:54.699 Malloc3 00:19:54.699 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:54.960 [2024-12-06 14:13:43.381248] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:54.960 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:54.960 Asynchronous Event Request test 00:19:54.960 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.960 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.960 Registering asynchronous event callbacks... 00:19:54.960 Starting namespace attribute notice tests for all controllers... 00:19:54.960 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:54.960 aer_cb - Changed Namespace 00:19:54.960 Cleaning up... 00:19:54.960 [ 00:19:54.960 { 00:19:54.960 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:54.960 "subtype": "Discovery", 00:19:54.960 "listen_addresses": [], 00:19:54.960 "allow_any_host": true, 00:19:54.960 "hosts": [] 00:19:54.960 }, 00:19:54.960 { 00:19:54.960 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:54.960 "subtype": "NVMe", 00:19:54.960 "listen_addresses": [ 00:19:54.960 { 00:19:54.960 "trtype": "VFIOUSER", 00:19:54.960 "adrfam": "IPv4", 00:19:54.960 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:54.960 "trsvcid": "0" 00:19:54.960 } 00:19:54.960 ], 00:19:54.960 "allow_any_host": true, 00:19:54.960 "hosts": [], 00:19:54.960 "serial_number": "SPDK1", 00:19:54.960 "model_number": "SPDK bdev Controller", 00:19:54.960 "max_namespaces": 32, 00:19:54.960 "min_cntlid": 1, 00:19:54.960 "max_cntlid": 65519, 00:19:54.960 "namespaces": [ 00:19:54.960 { 00:19:54.960 "nsid": 1, 00:19:54.960 "bdev_name": "Malloc1", 00:19:54.960 "name": "Malloc1", 00:19:54.960 "nguid": "C2F50CAF9F7A4E848854C44141A89713", 00:19:54.960 "uuid": "c2f50caf-9f7a-4e84-8854-c44141a89713" 00:19:54.960 }, 00:19:54.960 { 00:19:54.960 "nsid": 2, 00:19:54.960 "bdev_name": "Malloc3", 00:19:54.960 "name": "Malloc3", 00:19:54.960 "nguid": "F6F82F62D6B743D8977CE8455A963D96", 00:19:54.960 "uuid": "f6f82f62-d6b7-43d8-977c-e8455a963d96" 00:19:54.960 } 00:19:54.960 ] 00:19:54.960 }, 00:19:54.960 { 00:19:54.960 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:54.960 "subtype": "NVMe", 00:19:54.960 "listen_addresses": [ 00:19:54.960 { 00:19:54.960 "trtype": "VFIOUSER", 00:19:54.960 "adrfam": "IPv4", 00:19:54.960 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:54.960 "trsvcid": "0" 00:19:54.960 } 00:19:54.960 ], 00:19:54.960 "allow_any_host": true, 00:19:54.960 "hosts": [], 00:19:54.960 "serial_number": "SPDK2", 00:19:54.960 "model_number": "SPDK bdev 
Controller", 00:19:54.960 "max_namespaces": 32, 00:19:54.960 "min_cntlid": 1, 00:19:54.960 "max_cntlid": 65519, 00:19:54.960 "namespaces": [ 00:19:54.960 { 00:19:54.960 "nsid": 1, 00:19:54.960 "bdev_name": "Malloc2", 00:19:54.960 "name": "Malloc2", 00:19:54.960 "nguid": "D34E6379FE904A31A58BFB34F02B483E", 00:19:54.960 "uuid": "d34e6379-fe90-4a31-a58b-fb34f02b483e" 00:19:54.960 } 00:19:54.960 ] 00:19:54.960 } 00:19:54.960 ] 00:19:54.960 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2783873 00:19:54.960 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:54.960 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:54.960 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:54.960 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:55.224 [2024-12-06 14:13:43.618360] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:19:55.224 [2024-12-06 14:13:43.618403] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783902 ] 00:19:55.224 [2024-12-06 14:13:43.655679] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:55.224 [2024-12-06 14:13:43.660858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:55.224 [2024-12-06 14:13:43.660876] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4cfb525000 00:19:55.224 [2024-12-06 14:13:43.661865] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:55.224 [2024-12-06 14:13:43.662875] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:55.224 [2024-12-06 14:13:43.663884] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:55.224 [2024-12-06 14:13:43.664886] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:55.224 [2024-12-06 14:13:43.665895] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:55.224 [2024-12-06 14:13:43.666896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:55.224 [2024-12-06 14:13:43.667908] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:55.224 [2024-12-06 14:13:43.668912] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:19:55.224 [2024-12-06 14:13:43.669922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:55.224 [2024-12-06 14:13:43.669929] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4cfb51a000 00:19:55.224 [2024-12-06 14:13:43.670839] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:55.224 [2024-12-06 14:13:43.684815] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:55.224 [2024-12-06 14:13:43.684838] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:55.224 [2024-12-06 14:13:43.686896] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:55.224 [2024-12-06 14:13:43.686927] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:55.224 [2024-12-06 14:13:43.686985] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:55.224 [2024-12-06 14:13:43.686994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:55.224 [2024-12-06 14:13:43.686998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:55.224 [2024-12-06 14:13:43.687900] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:55.224 [2024-12-06 14:13:43.687907] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:55.224 [2024-12-06 14:13:43.687912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:55.224 [2024-12-06 14:13:43.688902] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:55.224 [2024-12-06 14:13:43.688909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:55.224 [2024-12-06 14:13:43.688915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:55.224 [2024-12-06 14:13:43.689912] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:55.224 [2024-12-06 14:13:43.689918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:55.224 [2024-12-06 14:13:43.690934] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:55.224 [2024-12-06 14:13:43.690940] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:19:55.224 [2024-12-06 14:13:43.690944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:55.224 [2024-12-06 14:13:43.690949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:55.224 [2024-12-06 14:13:43.691057] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:55.224 [2024-12-06 14:13:43.691060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:55.224 [2024-12-06 14:13:43.691064] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:55.224 [2024-12-06 14:13:43.691934] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:55.224 [2024-12-06 14:13:43.692937] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:55.224 [2024-12-06 14:13:43.693948] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:55.224 [2024-12-06 14:13:43.694951] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:55.224 [2024-12-06 14:13:43.694982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:55.224 [2024-12-06 14:13:43.695959] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:55.224 [2024-12-06 14:13:43.695965] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:55.224 [2024-12-06 14:13:43.695969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:55.224 [2024-12-06 14:13:43.695984] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:55.224 [2024-12-06 14:13:43.695992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:55.224 [2024-12-06 14:13:43.696004] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:55.224 [2024-12-06 14:13:43.696008] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:55.224 [2024-12-06 14:13:43.696011] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:55.224 [2024-12-06 14:13:43.696020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:55.224 [2024-12-06 14:13:43.706461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:55.224 
[2024-12-06 14:13:43.706469] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:55.224 [2024-12-06 14:13:43.706475] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:55.224 [2024-12-06 14:13:43.706478] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:55.224 [2024-12-06 14:13:43.706482] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:55.224 [2024-12-06 14:13:43.706485] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:55.224 [2024-12-06 14:13:43.706488] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:55.224 [2024-12-06 14:13:43.706492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:55.224 [2024-12-06 14:13:43.706497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:55.224 [2024-12-06 14:13:43.706506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:55.224 [2024-12-06 14:13:43.714459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:55.224 [2024-12-06 14:13:43.714468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.224 [2024-12-06 14:13:43.714474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.224 [2024-12-06 14:13:43.714481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.224 [2024-12-06 14:13:43.714487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.224 [2024-12-06 14:13:43.714490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:55.224 [2024-12-06 14:13:43.714497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:55.224 [2024-12-06 14:13:43.714503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.722459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:55.225 [2024-12-06 14:13:43.722464] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:55.225 [2024-12-06 14:13:43.722468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:19:55.225 [2024-12-06 14:13:43.722473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.722477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.722484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.730459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:55.225 [2024-12-06 14:13:43.730504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.730510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.730515] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:55.225 [2024-12-06 14:13:43.730519] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:55.225 [2024-12-06 14:13:43.730521] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:55.225 [2024-12-06 14:13:43.730526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.738458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:55.225 [2024-12-06 14:13:43.738466] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:55.225 [2024-12-06 14:13:43.738476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.738483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.738488] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:55.225 [2024-12-06 14:13:43.738491] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:55.225 [2024-12-06 14:13:43.738494] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:55.225 [2024-12-06 14:13:43.738498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.746460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:55.225 [2024-12-06 14:13:43.746470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.746475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.746481] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:55.225 [2024-12-06 14:13:43.746484] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:55.225 [2024-12-06 14:13:43.746486] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:55.225 [2024-12-06 14:13:43.746490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.754458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:55.225 [2024-12-06 14:13:43.754465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.754470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.754476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.754481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.754485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.754488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.754492] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:55.225 [2024-12-06 14:13:43.754495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:55.225 [2024-12-06 14:13:43.754499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:55.225 [2024-12-06 14:13:43.754512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.762458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:55.225 [2024-12-06 14:13:43.762468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.770459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:55.225 [2024-12-06 14:13:43.770468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.778459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:19:55.225 [2024-12-06 14:13:43.778468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.786458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:55.225 [2024-12-06 14:13:43.786470] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:55.225 [2024-12-06 14:13:43.786473] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:55.225 [2024-12-06 14:13:43.786475] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:55.225 [2024-12-06 14:13:43.786478] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:55.225 [2024-12-06 14:13:43.786480] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:55.225 [2024-12-06 14:13:43.786485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:55.225 [2024-12-06 14:13:43.786490] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:55.225 [2024-12-06 14:13:43.786493] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:55.225 [2024-12-06 14:13:43.786496] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:55.225 [2024-12-06 14:13:43.786500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.786505] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:55.225 [2024-12-06 14:13:43.786508] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:55.225 [2024-12-06 14:13:43.786510] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:55.225 [2024-12-06 14:13:43.786515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.786520] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:55.225 [2024-12-06 14:13:43.786523] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:55.225 [2024-12-06 14:13:43.786526] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:55.225 [2024-12-06 14:13:43.786530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:55.225 [2024-12-06 14:13:43.794458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:55.225 [2024-12-06 14:13:43.794469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:55.225 [2024-12-06 14:13:43.794476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:55.225 
[2024-12-06 14:13:43.794481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:55.225 ===================================================== 00:19:55.225 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:55.225 ===================================================== 00:19:55.225 Controller Capabilities/Features 00:19:55.225 ================================ 00:19:55.225 Vendor ID: 4e58 00:19:55.225 Subsystem Vendor ID: 4e58 00:19:55.225 Serial Number: SPDK2 00:19:55.225 Model Number: SPDK bdev Controller 00:19:55.225 Firmware Version: 25.01 00:19:55.225 Recommended Arb Burst: 6 00:19:55.225 IEEE OUI Identifier: 8d 6b 50 00:19:55.225 Multi-path I/O 00:19:55.225 May have multiple subsystem ports: Yes 00:19:55.225 May have multiple controllers: Yes 00:19:55.225 Associated with SR-IOV VF: No 00:19:55.225 Max Data Transfer Size: 131072 00:19:55.225 Max Number of Namespaces: 32 00:19:55.225 Max Number of I/O Queues: 127 00:19:55.225 NVMe Specification Version (VS): 1.3 00:19:55.225 NVMe Specification Version (Identify): 1.3 00:19:55.225 Maximum Queue Entries: 256 00:19:55.225 Contiguous Queues Required: Yes 00:19:55.225 Arbitration Mechanisms Supported 00:19:55.225 Weighted Round Robin: Not Supported 00:19:55.225 Vendor Specific: Not Supported 00:19:55.225 Reset Timeout: 15000 ms 00:19:55.225 Doorbell Stride: 4 bytes 00:19:55.225 NVM Subsystem Reset: Not Supported 00:19:55.225 Command Sets Supported 00:19:55.225 NVM Command Set: Supported 00:19:55.226 Boot Partition: Not Supported 00:19:55.226 Memory Page Size Minimum: 4096 bytes 00:19:55.226 Memory Page Size Maximum: 4096 bytes 00:19:55.226 Persistent Memory Region: Not Supported 00:19:55.226 Optional Asynchronous Events Supported 00:19:55.226 Namespace Attribute Notices: Supported 00:19:55.226 Firmware Activation Notices: Not Supported 00:19:55.226 ANA Change Notices: Not Supported 00:19:55.226 PLE Aggregate Log Change Notices: Not Supported 00:19:55.226 LBA Status Info Alert Notices: Not Supported 00:19:55.226 EGE Aggregate Log Change Notices: Not Supported 00:19:55.226 Normal NVM Subsystem Shutdown event: Not Supported 00:19:55.226 Zone Descriptor Change Notices: Not Supported 00:19:55.226 Discovery Log Change Notices: Not Supported 00:19:55.226 Controller Attributes 00:19:55.226 128-bit Host Identifier: Supported 00:19:55.226 Non-Operational Permissive Mode: Not Supported 00:19:55.226 NVM Sets: Not Supported 00:19:55.226 Read Recovery Levels: Not Supported 00:19:55.226 Endurance Groups: Not Supported 00:19:55.226 Predictable Latency Mode: Not Supported 00:19:55.226 Traffic Based Keep ALive: Not Supported 00:19:55.226 Namespace Granularity: Not Supported 00:19:55.226 SQ Associations: Not Supported 00:19:55.226 UUID List: Not Supported 00:19:55.226 Multi-Domain Subsystem: Not Supported 00:19:55.226 Fixed Capacity Management: Not Supported 00:19:55.226 Variable Capacity Management: Not Supported 00:19:55.226 Delete Endurance Group: Not Supported 00:19:55.226 Delete NVM Set: Not Supported 00:19:55.226 Extended LBA Formats Supported: Not Supported 00:19:55.226 Flexible Data Placement Supported: Not Supported 00:19:55.226 00:19:55.226 Controller Memory Buffer Support 00:19:55.226 ================================ 00:19:55.226 Supported: No 00:19:55.226 00:19:55.226 Persistent Memory Region Support 00:19:55.226 ================================ 00:19:55.226 Supported: No 00:19:55.226 00:19:55.226 Admin Command Set Attributes 
00:19:55.226 ============================ 00:19:55.226 Security Send/Receive: Not Supported 00:19:55.226 Format NVM: Not Supported 00:19:55.226 Firmware Activate/Download: Not Supported 00:19:55.226 Namespace Management: Not Supported 00:19:55.226 Device Self-Test: Not Supported 00:19:55.226 Directives: Not Supported 00:19:55.226 NVMe-MI: Not Supported 00:19:55.226 Virtualization Management: Not Supported 00:19:55.226 Doorbell Buffer Config: Not Supported 00:19:55.226 Get LBA Status Capability: Not Supported 00:19:55.226 Command & Feature Lockdown Capability: Not Supported 00:19:55.226 Abort Command Limit: 4 00:19:55.226 Async Event Request Limit: 4 00:19:55.226 Number of Firmware Slots: N/A 00:19:55.226 Firmware Slot 1 Read-Only: N/A 00:19:55.226 Firmware Activation Without Reset: N/A 00:19:55.226 Multiple Update Detection Support: N/A 00:19:55.226 Firmware Update Granularity: No Information Provided 00:19:55.226 Per-Namespace SMART Log: No 00:19:55.226 Asymmetric Namespace Access Log Page: Not Supported 00:19:55.226 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:55.226 Command Effects Log Page: Supported 00:19:55.226 Get Log Page Extended Data: Supported 00:19:55.226 Telemetry Log Pages: Not Supported 00:19:55.226 Persistent Event Log Pages: Not Supported 00:19:55.226 Supported Log Pages Log Page: May Support 00:19:55.226 Commands Supported & Effects Log Page: Not Supported 00:19:55.226 Feature Identifiers & Effects Log Page:May Support 00:19:55.226 NVMe-MI Commands & Effects Log Page: May Support 00:19:55.226 Data Area 4 for Telemetry Log: Not Supported 00:19:55.226 Error Log Page Entries Supported: 128 00:19:55.226 Keep Alive: Supported 00:19:55.226 Keep Alive Granularity: 10000 ms 00:19:55.226 00:19:55.226 NVM Command Set Attributes 00:19:55.226 ========================== 00:19:55.226 Submission Queue Entry Size 00:19:55.226 Max: 64 00:19:55.226 Min: 64 00:19:55.226 Completion Queue Entry Size 00:19:55.226 Max: 16 00:19:55.226 Min: 16 00:19:55.226 Number of Namespaces: 32 00:19:55.226 Compare Command: Supported 00:19:55.226 Write Uncorrectable Command: Not Supported 00:19:55.226 Dataset Management Command: Supported 00:19:55.226 Write Zeroes Command: Supported 00:19:55.226 Set Features Save Field: Not Supported 00:19:55.226 Reservations: Not Supported 00:19:55.226 Timestamp: Not Supported 00:19:55.226 Copy: Supported 00:19:55.226 Volatile Write Cache: Present 00:19:55.226 Atomic Write Unit (Normal): 1 00:19:55.226 Atomic Write Unit (PFail): 1 00:19:55.226 Atomic Compare & Write Unit: 1 00:19:55.226 Fused Compare & Write: Supported 00:19:55.226 Scatter-Gather List 00:19:55.226 SGL Command Set: Supported (Dword aligned) 00:19:55.226 SGL Keyed: Not Supported 00:19:55.226 SGL Bit Bucket Descriptor: Not Supported 00:19:55.226 SGL Metadata Pointer: Not Supported 00:19:55.226 Oversized SGL: Not Supported 00:19:55.226 SGL Metadata Address: Not Supported 00:19:55.226 SGL Offset: Not Supported 00:19:55.226 Transport SGL Data Block: Not Supported 00:19:55.226 Replay Protected Memory Block: Not Supported 00:19:55.226 00:19:55.226 Firmware Slot Information 00:19:55.226 ========================= 00:19:55.226 Active slot: 1 00:19:55.226 Slot 1 Firmware Revision: 25.01 00:19:55.226 00:19:55.226 00:19:55.226 Commands Supported and Effects 00:19:55.226 ============================== 00:19:55.226 Admin Commands 00:19:55.226 -------------- 00:19:55.226 Get Log Page (02h): Supported 00:19:55.226 Identify (06h): Supported 00:19:55.226 Abort (08h): Supported 00:19:55.226 Set Features (09h): Supported 
00:19:55.226 Get Features (0Ah): Supported 00:19:55.226 Asynchronous Event Request (0Ch): Supported 00:19:55.226 Keep Alive (18h): Supported 00:19:55.226 I/O Commands 00:19:55.226 ------------ 00:19:55.226 Flush (00h): Supported LBA-Change 00:19:55.226 Write (01h): Supported LBA-Change 00:19:55.226 Read (02h): Supported 00:19:55.226 Compare (05h): Supported 00:19:55.226 Write Zeroes (08h): Supported LBA-Change 00:19:55.226 Dataset Management (09h): Supported LBA-Change 00:19:55.226 Copy (19h): Supported LBA-Change 00:19:55.226 00:19:55.226 Error Log 00:19:55.226 ========= 00:19:55.226 00:19:55.226 Arbitration 00:19:55.226 =========== 00:19:55.226 Arbitration Burst: 1 00:19:55.226 00:19:55.226 Power Management 00:19:55.226 ================ 00:19:55.226 Number of Power States: 1 00:19:55.226 Current Power State: Power State #0 00:19:55.226 Power State #0: 00:19:55.226 Max Power: 0.00 W 00:19:55.226 Non-Operational State: Operational 00:19:55.226 Entry Latency: Not Reported 00:19:55.226 Exit Latency: Not Reported 00:19:55.226 Relative Read Throughput: 0 00:19:55.226 Relative Read Latency: 0 00:19:55.226 Relative Write Throughput: 0 00:19:55.226 Relative Write Latency: 0 00:19:55.226 Idle Power: Not Reported 00:19:55.226 Active Power: Not Reported 00:19:55.226 Non-Operational Permissive Mode: Not Supported 00:19:55.226 00:19:55.226 Health Information 00:19:55.226 ================== 00:19:55.226 Critical Warnings: 00:19:55.226 Available Spare Space: OK 00:19:55.226 Temperature: OK 00:19:55.226 Device Reliability: OK 00:19:55.226 Read Only: No 00:19:55.226 Volatile Memory Backup: OK 00:19:55.226 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:55.226 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:55.226 Available Spare: 0% 00:19:55.226 Available Sp[2024-12-06 14:13:43.794552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:55.226 [2024-12-06 14:13:43.802458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:55.226 [2024-12-06 14:13:43.802482] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:19:55.226 [2024-12-06 14:13:43.802489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.226 [2024-12-06 14:13:43.802494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.226 [2024-12-06 14:13:43.802498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.226 [2024-12-06 14:13:43.802503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.226 [2024-12-06 14:13:43.802532] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:55.226 [2024-12-06 14:13:43.802540] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:55.226 [2024-12-06 14:13:43.803537] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:55.226 [2024-12-06 14:13:43.803571] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:19:55.226 [2024-12-06 14:13:43.803576] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:19:55.226 [2024-12-06 14:13:43.804538] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:55.227 [2024-12-06 14:13:43.804546] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:19:55.227 [2024-12-06 14:13:43.804590] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:55.227 [2024-12-06 14:13:43.805563] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:55.227 are Threshold: 0% 00:19:55.227 Life Percentage Used: 0% 00:19:55.227 Data Units Read: 0 00:19:55.227 Data Units Written: 0 00:19:55.227 Host Read Commands: 0 00:19:55.227 Host Write Commands: 0 00:19:55.227 Controller Busy Time: 0 minutes 00:19:55.227 Power Cycles: 0 00:19:55.227 Power On Hours: 0 hours 00:19:55.227 Unsafe Shutdowns: 0 00:19:55.227 Unrecoverable Media Errors: 0 00:19:55.227 Lifetime Error Log Entries: 0 00:19:55.227 Warning Temperature Time: 0 minutes 00:19:55.227 Critical Temperature Time: 0 minutes 00:19:55.227 00:19:55.227 Number of Queues 00:19:55.227 ================ 00:19:55.227 Number of I/O Submission Queues: 127 00:19:55.227 Number of I/O Completion Queues: 127 00:19:55.227 00:19:55.227 Active Namespaces 00:19:55.227 ================= 00:19:55.227 Namespace ID:1 00:19:55.227 Error Recovery Timeout: Unlimited 00:19:55.227 Command Set Identifier: NVM (00h) 00:19:55.227 Deallocate: Supported 00:19:55.227 Deallocated/Unwritten Error: Not Supported 00:19:55.227 Deallocated Read Value: Unknown 00:19:55.227 Deallocate in Write Zeroes: Not Supported 00:19:55.227 Deallocated Guard Field: 0xFFFF 00:19:55.227 Flush: Supported 00:19:55.227 Reservation: Supported 00:19:55.227 Namespace Sharing Capabilities: Multiple Controllers 00:19:55.227 Size (in LBAs): 131072 (0GiB) 00:19:55.227 Capacity (in LBAs): 131072 (0GiB) 00:19:55.227 Utilization (in LBAs): 131072 (0GiB) 00:19:55.227 NGUID: D34E6379FE904A31A58BFB34F02B483E 00:19:55.227 UUID: d34e6379-fe90-4a31-a58b-fb34f02b483e 00:19:55.227 Thin Provisioning: Not Supported 00:19:55.227 Per-NS Atomic Units: Yes 00:19:55.227 Atomic Boundary Size (Normal): 0 00:19:55.227 Atomic Boundary Size (PFail): 0 00:19:55.227 Atomic Boundary Offset: 0 00:19:55.227 Maximum Single Source Range Length: 65535 00:19:55.227 Maximum Copy Length: 65535 00:19:55.227 Maximum Source Range Count: 1 00:19:55.227 NGUID/EUI64 Never Reused: No 00:19:55.227 Namespace Write Protected: No 00:19:55.227 Number of LBA Formats: 1 00:19:55.227 Current LBA Format: LBA Format #00 00:19:55.227 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:55.227 00:19:55.227 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:55.488 [2024-12-06 14:13:43.995838] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:00.771 Initializing NVMe Controllers 00:20:00.771 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:00.771 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:00.771 Initialization complete. Launching workers. 00:20:00.771 ======================================================== 00:20:00.771 Latency(us) 00:20:00.771 Device Information : IOPS MiB/s Average min max 00:20:00.772 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39981.17 156.18 3201.38 866.39 9758.54 00:20:00.772 ======================================================== 00:20:00.772 Total : 39981.17 156.18 3201.38 866.39 9758.54 00:20:00.772 00:20:00.772 [2024-12-06 14:13:49.101660] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:00.772 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:00.772 [2024-12-06 14:13:49.292239] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:06.212 Initializing NVMe Controllers 00:20:06.212 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:06.212 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:06.212 Initialization complete. Launching workers. 00:20:06.212 ======================================================== 00:20:06.212 Latency(us) 00:20:06.212 Device Information : IOPS MiB/s Average min max 00:20:06.212 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40026.60 156.35 3198.42 865.52 7742.92 00:20:06.212 ======================================================== 00:20:06.212 Total : 40026.60 156.35 3198.42 865.52 7742.92 00:20:06.212 00:20:06.212 [2024-12-06 14:13:54.311528] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:06.212 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:06.212 [2024-12-06 14:13:54.511728] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:11.508 [2024-12-06 14:13:59.639564] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:11.508 Initializing NVMe Controllers 00:20:11.508 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:11.508 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:11.508 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:11.508 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:11.508 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:11.508 Initialization complete. Launching workers. 
00:20:11.508 Starting thread on core 2 00:20:11.508 Starting thread on core 3 00:20:11.508 Starting thread on core 1 00:20:11.508 14:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:11.508 [2024-12-06 14:13:59.887844] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:15.715 [2024-12-06 14:14:03.672590] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:15.715 Initializing NVMe Controllers 00:20:15.715 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:15.715 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:15.715 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:15.715 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:15.715 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:15.715 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:15.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:15.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:15.715 Initialization complete. Launching workers. 00:20:15.715 Starting thread on core 1 with urgent priority queue 00:20:15.715 Starting thread on core 2 with urgent priority queue 00:20:15.715 Starting thread on core 3 with urgent priority queue 00:20:15.715 Starting thread on core 0 with urgent priority queue 00:20:15.715 SPDK bdev Controller (SPDK2 ) core 0: 3912.67 IO/s 25.56 secs/100000 ios 00:20:15.715 SPDK bdev Controller (SPDK2 ) core 1: 4101.67 IO/s 24.38 secs/100000 ios 00:20:15.715 SPDK bdev Controller (SPDK2 ) core 2: 2190.33 IO/s 45.66 secs/100000 ios 00:20:15.715 SPDK bdev Controller (SPDK2 ) core 3: 2417.33 IO/s 41.37 secs/100000 ios 00:20:15.715 ======================================================== 00:20:15.715 00:20:15.715 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:15.715 [2024-12-06 14:14:03.913821] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:15.715 Initializing NVMe Controllers 00:20:15.715 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:15.715 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:15.715 Namespace ID: 1 size: 0GB 00:20:15.715 Initialization complete. 00:20:15.715 INFO: using host memory buffer for IO 00:20:15.715 Hello world! 
00:20:15.715 [2024-12-06 14:14:03.923889] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:15.715 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:15.715 [2024-12-06 14:14:04.159283] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:16.657 Initializing NVMe Controllers 00:20:16.657 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:16.657 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:16.657 Initialization complete. Launching workers. 00:20:16.657 submit (in ns) avg, min, max = 6185.7, 2831.7, 3998847.5 00:20:16.657 complete (in ns) avg, min, max = 16368.2, 1632.5, 4002909.2 00:20:16.657 00:20:16.657 Submit histogram 00:20:16.657 ================ 00:20:16.657 Range in us Cumulative Count 00:20:16.657 2.827 - 2.840: 0.1155% ( 23) 00:20:16.657 2.840 - 2.853: 0.8935% ( 155) 00:20:16.657 2.853 - 2.867: 3.3231% ( 484) 00:20:16.657 2.867 - 2.880: 9.2766% ( 1186) 00:20:16.657 2.880 - 2.893: 14.5977% ( 1060) 00:20:16.657 2.893 - 2.907: 20.2650% ( 1129) 00:20:16.657 2.907 - 2.920: 25.8571% ( 1114) 00:20:16.657 2.920 - 2.933: 31.6199% ( 1148) 00:20:16.657 2.933 - 2.947: 37.2019% ( 1112) 00:20:16.657 2.947 - 2.960: 41.8001% ( 916) 00:20:16.657 2.960 - 2.973: 46.4334% ( 923) 00:20:16.657 2.973 - 2.987: 51.3930% ( 988) 00:20:16.657 2.987 - 3.000: 60.0823% ( 1731) 00:20:16.657 3.000 - 3.013: 69.8108% ( 1938) 00:20:16.657 3.013 - 3.027: 78.8013% ( 1791) 00:20:16.657 3.027 - 3.040: 85.8792% ( 1410) 00:20:16.657 3.040 - 3.053: 91.3358% ( 1087) 00:20:16.657 3.053 - 3.067: 95.3115% ( 792) 00:20:16.657 3.067 - 3.080: 97.6206% ( 460) 00:20:16.657 3.080 - 3.093: 98.6898% ( 213) 00:20:16.657 3.093 - 3.107: 99.3424% ( 130) 00:20:16.657 3.107 - 3.120: 99.4980% ( 31) 00:20:16.657 3.120 - 3.133: 99.5633% ( 13) 00:20:16.657 3.133 - 3.147: 99.6034% ( 8) 00:20:16.657 3.147 - 3.160: 99.6085% ( 1) 00:20:16.657 3.160 - 3.173: 99.6185% ( 2) 00:20:16.657 3.187 - 3.200: 99.6285% ( 2) 00:20:16.657 3.467 - 3.493: 99.6386% ( 2) 00:20:16.657 3.493 - 3.520: 99.6436% ( 1) 00:20:16.657 3.547 - 3.573: 99.6486% ( 1) 00:20:16.657 3.653 - 3.680: 99.6587% ( 2) 00:20:16.657 3.760 - 3.787: 99.6637% ( 1) 00:20:16.657 3.867 - 3.893: 99.6737% ( 2) 00:20:16.657 4.213 - 4.240: 99.6787% ( 1) 00:20:16.657 4.560 - 4.587: 99.6838% ( 1) 00:20:16.657 4.693 - 4.720: 99.6988% ( 3) 00:20:16.657 4.720 - 4.747: 99.7038% ( 1) 00:20:16.657 4.853 - 4.880: 99.7189% ( 3) 00:20:16.657 4.880 - 4.907: 99.7239% ( 1) 00:20:16.657 4.933 - 4.960: 99.7289% ( 1) 00:20:16.657 5.040 - 5.067: 99.7490% ( 4) 00:20:16.657 5.067 - 5.093: 99.7590% ( 2) 00:20:16.657 5.093 - 5.120: 99.7641% ( 1) 00:20:16.657 5.120 - 5.147: 99.7691% ( 1) 00:20:16.657 5.147 - 5.173: 99.7741% ( 1) 00:20:16.657 5.173 - 5.200: 99.7791% ( 1) 00:20:16.657 5.200 - 5.227: 99.7841% ( 1) 00:20:16.657 5.333 - 5.360: 99.7942% ( 2) 00:20:16.657 5.467 - 5.493: 99.8042% ( 2) 00:20:16.657 5.520 - 5.547: 99.8143% ( 2) 00:20:16.657 5.547 - 5.573: 99.8193% ( 1) 00:20:16.657 5.600 - 5.627: 99.8243% ( 1) 00:20:16.657 5.653 - 5.680: 99.8293% ( 1) 00:20:16.657 5.680 - 5.707: 99.8343% ( 1) 00:20:16.657 5.707 - 5.733: 99.8394% ( 1) 00:20:16.657 5.787 - 5.813: 99.8444% ( 1) 00:20:16.657 5.867 - 5.893: 99.8494% ( 1) 00:20:16.657 5.893 - 5.920: 
99.8544% ( 1) 00:20:16.657 5.947 - 5.973: 99.8594% ( 1) 00:20:16.657 5.973 - 6.000: 99.8645% ( 1) 00:20:16.657 6.080 - 6.107: 99.8695% ( 1) 00:20:16.657 6.133 - 6.160: 99.8745% ( 1) 00:20:16.657 6.187 - 6.213: 99.8795% ( 1) 00:20:16.657 6.293 - 6.320: 99.8845% ( 1) 00:20:16.657 6.373 - 6.400: 99.8896% ( 1) 00:20:16.657 6.427 - 6.453: 99.8996% ( 2) 00:20:16.657 6.720 - 6.747: 99.9046% ( 1) 00:20:16.657 7.200 - 7.253: 99.9096% ( 1) 00:20:16.658 7.467 - 7.520: 99.9147% ( 1) 00:20:16.658 9.707 - 9.760: 99.9197% ( 1) 00:20:16.658 3986.773 - 4014.080: 100.0000% ( 16) 00:20:16.658 00:20:16.658 Complete histogram 00:20:16.658 ================== 00:20:16.658 Range in us Cumulative Count 00:20:16.658 1.627 - 1.633: 0.0050% ( 1) 00:20:16.658 1.633 - 1.640: 0.0100% ( 1) 00:20:16.658 1.640 - 1.647: 0.8935% ( 176) 00:20:16.658 1.647 - 1.653: 1.2550% ( 72) 00:20:16.658 1.653 - 1.660: 1.3202% ( 13) 00:20:16.658 1.660 - 1.667: 1.4909% ( 34) 00:20:16.658 1.667 - [2024-12-06 14:14:05.252984] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:16.658 1.673: 1.5461% ( 11) 00:20:16.658 1.673 - 1.680: 1.5963% ( 10) 00:20:16.658 1.680 - 1.687: 1.6264% ( 6) 00:20:16.658 1.687 - 1.693: 1.6415% ( 3) 00:20:16.658 1.693 - 1.700: 1.6616% ( 4) 00:20:16.658 1.700 - 1.707: 20.5462% ( 3762) 00:20:16.658 1.707 - 1.720: 47.2065% ( 5311) 00:20:16.658 1.720 - 1.733: 73.8768% ( 5313) 00:20:16.658 1.733 - 1.747: 82.9778% ( 1813) 00:20:16.658 1.747 - 1.760: 84.5741% ( 318) 00:20:16.658 1.760 - 1.773: 87.4354% ( 570) 00:20:16.658 1.773 - 1.787: 91.8729% ( 884) 00:20:16.658 1.787 - 1.800: 96.1297% ( 848) 00:20:16.658 1.800 - 1.813: 98.5744% ( 487) 00:20:16.658 1.813 - 1.827: 99.3173% ( 148) 00:20:16.658 1.827 - 1.840: 99.4880% ( 34) 00:20:16.658 1.840 - 1.853: 99.4980% ( 2) 00:20:16.658 3.107 - 3.120: 99.5030% ( 1) 00:20:16.658 3.267 - 3.280: 99.5081% ( 1) 00:20:16.658 3.627 - 3.653: 99.5131% ( 1) 00:20:16.658 3.680 - 3.707: 99.5181% ( 1) 00:20:16.658 3.707 - 3.733: 99.5231% ( 1) 00:20:16.658 3.813 - 3.840: 99.5332% ( 2) 00:20:16.658 3.867 - 3.893: 99.5382% ( 1) 00:20:16.658 3.893 - 3.920: 99.5432% ( 1) 00:20:16.658 3.947 - 3.973: 99.5482% ( 1) 00:20:16.658 4.027 - 4.053: 99.5583% ( 2) 00:20:16.658 4.213 - 4.240: 99.5633% ( 1) 00:20:16.658 4.267 - 4.293: 99.5683% ( 1) 00:20:16.658 4.320 - 4.347: 99.5733% ( 1) 00:20:16.658 4.400 - 4.427: 99.5783% ( 1) 00:20:16.658 4.667 - 4.693: 99.5834% ( 1) 00:20:16.658 4.800 - 4.827: 99.5884% ( 1) 00:20:16.658 4.880 - 4.907: 99.5934% ( 1) 00:20:16.658 4.933 - 4.960: 99.5984% ( 1) 00:20:16.658 5.067 - 5.093: 99.6085% ( 2) 00:20:16.658 5.547 - 5.573: 99.6135% ( 1) 00:20:16.658 6.187 - 6.213: 99.6185% ( 1) 00:20:16.658 9.120 - 9.173: 99.6235% ( 1) 00:20:16.658 10.133 - 10.187: 99.6285% ( 1) 00:20:16.658 10.987 - 11.040: 99.6336% ( 1) 00:20:16.658 3986.773 - 4014.080: 100.0000% ( 73) 00:20:16.658 00:20:16.658 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:16.658 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:16.658 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:16.658 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:16.658 14:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:16.919 [ 00:20:16.919 { 00:20:16.919 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:16.919 "subtype": "Discovery", 00:20:16.919 "listen_addresses": [], 00:20:16.919 "allow_any_host": true, 00:20:16.919 "hosts": [] 00:20:16.919 }, 00:20:16.919 { 00:20:16.919 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:16.919 "subtype": "NVMe", 00:20:16.919 "listen_addresses": [ 00:20:16.919 { 00:20:16.919 "trtype": "VFIOUSER", 00:20:16.919 "adrfam": "IPv4", 00:20:16.919 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:16.919 "trsvcid": "0" 00:20:16.919 } 00:20:16.919 ], 00:20:16.919 "allow_any_host": true, 00:20:16.919 "hosts": [], 00:20:16.919 "serial_number": "SPDK1", 00:20:16.919 "model_number": "SPDK bdev Controller", 00:20:16.919 "max_namespaces": 32, 00:20:16.919 "min_cntlid": 1, 00:20:16.919 "max_cntlid": 65519, 00:20:16.919 "namespaces": [ 00:20:16.919 { 00:20:16.919 "nsid": 1, 00:20:16.919 "bdev_name": "Malloc1", 00:20:16.919 "name": "Malloc1", 00:20:16.919 "nguid": "C2F50CAF9F7A4E848854C44141A89713", 00:20:16.919 "uuid": "c2f50caf-9f7a-4e84-8854-c44141a89713" 00:20:16.919 }, 00:20:16.919 { 00:20:16.919 "nsid": 2, 00:20:16.919 "bdev_name": "Malloc3", 00:20:16.919 "name": "Malloc3", 00:20:16.919 "nguid": "F6F82F62D6B743D8977CE8455A963D96", 00:20:16.919 "uuid": "f6f82f62-d6b7-43d8-977c-e8455a963d96" 00:20:16.919 } 00:20:16.919 ] 00:20:16.919 }, 00:20:16.919 { 00:20:16.919 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:16.919 "subtype": "NVMe", 00:20:16.919 "listen_addresses": [ 00:20:16.919 { 00:20:16.919 "trtype": "VFIOUSER", 00:20:16.919 "adrfam": "IPv4", 00:20:16.919 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:16.919 "trsvcid": "0" 00:20:16.919 } 00:20:16.919 ], 00:20:16.919 "allow_any_host": true, 00:20:16.919 "hosts": [], 00:20:16.919 "serial_number": "SPDK2", 00:20:16.919 "model_number": "SPDK bdev Controller", 00:20:16.919 "max_namespaces": 32, 00:20:16.919 "min_cntlid": 1, 00:20:16.919 "max_cntlid": 65519, 00:20:16.919 "namespaces": [ 00:20:16.919 { 00:20:16.919 "nsid": 1, 00:20:16.919 "bdev_name": "Malloc2", 00:20:16.919 "name": "Malloc2", 00:20:16.919 "nguid": "D34E6379FE904A31A58BFB34F02B483E", 00:20:16.919 "uuid": "d34e6379-fe90-4a31-a58b-fb34f02b483e" 00:20:16.919 } 00:20:16.919 ] 00:20:16.919 } 00:20:16.919 ] 00:20:16.919 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:16.919 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2788206 00:20:16.919 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:16.919 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:16.919 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:16.919 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:16.919 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:16.919 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:16.919 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:16.919 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:17.179 [2024-12-06 14:14:05.630923] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:17.179 Malloc4 00:20:17.179 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:17.440 [2024-12-06 14:14:05.824189] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:17.440 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:17.440 Asynchronous Event Request test 00:20:17.440 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:17.440 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:17.440 Registering asynchronous event callbacks... 00:20:17.440 Starting namespace attribute notice tests for all controllers... 00:20:17.440 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:17.440 aer_cb - Changed Namespace 00:20:17.440 Cleaning up... 00:20:17.440 [ 00:20:17.440 { 00:20:17.440 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:17.440 "subtype": "Discovery", 00:20:17.440 "listen_addresses": [], 00:20:17.440 "allow_any_host": true, 00:20:17.440 "hosts": [] 00:20:17.440 }, 00:20:17.440 { 00:20:17.440 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:17.440 "subtype": "NVMe", 00:20:17.440 "listen_addresses": [ 00:20:17.440 { 00:20:17.440 "trtype": "VFIOUSER", 00:20:17.440 "adrfam": "IPv4", 00:20:17.440 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:17.440 "trsvcid": "0" 00:20:17.440 } 00:20:17.440 ], 00:20:17.440 "allow_any_host": true, 00:20:17.440 "hosts": [], 00:20:17.440 "serial_number": "SPDK1", 00:20:17.440 "model_number": "SPDK bdev Controller", 00:20:17.440 "max_namespaces": 32, 00:20:17.440 "min_cntlid": 1, 00:20:17.440 "max_cntlid": 65519, 00:20:17.440 "namespaces": [ 00:20:17.440 { 00:20:17.440 "nsid": 1, 00:20:17.440 "bdev_name": "Malloc1", 00:20:17.440 "name": "Malloc1", 00:20:17.440 "nguid": "C2F50CAF9F7A4E848854C44141A89713", 00:20:17.440 "uuid": "c2f50caf-9f7a-4e84-8854-c44141a89713" 00:20:17.440 }, 00:20:17.440 { 00:20:17.440 "nsid": 2, 00:20:17.440 "bdev_name": "Malloc3", 00:20:17.440 "name": "Malloc3", 00:20:17.440 "nguid": "F6F82F62D6B743D8977CE8455A963D96", 00:20:17.440 "uuid": "f6f82f62-d6b7-43d8-977c-e8455a963d96" 00:20:17.440 } 00:20:17.440 ] 00:20:17.440 }, 00:20:17.440 { 00:20:17.440 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:17.440 "subtype": "NVMe", 00:20:17.440 "listen_addresses": [ 00:20:17.440 { 00:20:17.440 "trtype": "VFIOUSER", 00:20:17.440 "adrfam": "IPv4", 00:20:17.440 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:17.440 "trsvcid": "0" 00:20:17.440 } 00:20:17.440 ], 00:20:17.440 "allow_any_host": true, 00:20:17.440 "hosts": [], 00:20:17.440 "serial_number": "SPDK2", 00:20:17.440 "model_number": "SPDK bdev 
Controller", 00:20:17.440 "max_namespaces": 32, 00:20:17.440 "min_cntlid": 1, 00:20:17.440 "max_cntlid": 65519, 00:20:17.440 "namespaces": [ 00:20:17.440 { 00:20:17.440 "nsid": 1, 00:20:17.440 "bdev_name": "Malloc2", 00:20:17.440 "name": "Malloc2", 00:20:17.440 "nguid": "D34E6379FE904A31A58BFB34F02B483E", 00:20:17.440 "uuid": "d34e6379-fe90-4a31-a58b-fb34f02b483e" 00:20:17.441 }, 00:20:17.441 { 00:20:17.441 "nsid": 2, 00:20:17.441 "bdev_name": "Malloc4", 00:20:17.441 "name": "Malloc4", 00:20:17.441 "nguid": "9B93EE2B5A844772989D12A458ABCF2D", 00:20:17.441 "uuid": "9b93ee2b-5a84-4772-989d-12a458abcf2d" 00:20:17.441 } 00:20:17.441 ] 00:20:17.441 } 00:20:17.441 ] 00:20:17.441 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2788206 00:20:17.441 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:17.441 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2778863 00:20:17.441 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2778863 ']' 00:20:17.441 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2778863 00:20:17.441 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:17.441 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.441 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2778863 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2778863' 00:20:17.703 killing process with pid 2778863 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2778863 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2778863 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2788276 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2788276' 00:20:17.703 Process pid: 2788276 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2788276 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2788276 ']' 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.703 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:17.703 [2024-12-06 14:14:06.298665] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:17.703 [2024-12-06 14:14:06.299599] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:20:17.703 [2024-12-06 14:14:06.299642] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.963 [2024-12-06 14:14:06.382945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.963 [2024-12-06 14:14:06.414020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.963 [2024-12-06 14:14:06.414057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.963 [2024-12-06 14:14:06.414063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.963 [2024-12-06 14:14:06.414067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.963 [2024-12-06 14:14:06.414072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.963 [2024-12-06 14:14:06.415328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.963 [2024-12-06 14:14:06.415499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.963 [2024-12-06 14:14:06.415579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.964 [2024-12-06 14:14:06.415580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.964 [2024-12-06 14:14:06.468091] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:17.964 [2024-12-06 14:14:06.468964] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:17.964 [2024-12-06 14:14:06.469941] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:17.964 [2024-12-06 14:14:06.470570] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:20:17.964 [2024-12-06 14:14:06.470591] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:20:18.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:18.535 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:19.919 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:19.919 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:19.919 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:19.919 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:19.919 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:19.919 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:19.919 Malloc1 00:20:19.919 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:20.180 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:20.442 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:20.704 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:20.704 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:20.704 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:20.704 Malloc2 00:20:20.704 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:20.965 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:21.226 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:21.226 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:21.226 14:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2788276 00:20:21.226 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2788276 ']' 00:20:21.226 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2788276 00:20:21.226 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:21.226 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.226 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2788276 00:20:21.487 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.487 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.487 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2788276' 00:20:21.487 killing process with pid 2788276 00:20:21.487 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2788276 00:20:21.487 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2788276 00:20:21.487 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:21.487 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:21.487 00:20:21.487 real 0m51.716s 00:20:21.487 user 3m18.213s 00:20:21.487 sys 0m2.702s 00:20:21.487 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.487 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:21.487 ************************************ 00:20:21.487 END TEST nvmf_vfio_user 00:20:21.487 ************************************ 00:20:21.487 14:14:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:21.487 14:14:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.487 14:14:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.487 14:14:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:21.487 ************************************ 00:20:21.487 START TEST nvmf_vfio_user_nvme_compliance 00:20:21.487 ************************************ 00:20:21.487 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:21.749 * Looking for test storage... 
00:20:21.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:21.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.749 --rc genhtml_branch_coverage=1 00:20:21.749 --rc genhtml_function_coverage=1 00:20:21.749 --rc genhtml_legend=1 00:20:21.749 --rc geninfo_all_blocks=1 00:20:21.749 --rc geninfo_unexecuted_blocks=1 00:20:21.749 00:20:21.749 ' 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:21.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.749 --rc genhtml_branch_coverage=1 00:20:21.749 --rc genhtml_function_coverage=1 00:20:21.749 --rc genhtml_legend=1 00:20:21.749 --rc geninfo_all_blocks=1 00:20:21.749 --rc geninfo_unexecuted_blocks=1 00:20:21.749 00:20:21.749 ' 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:21.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.749 --rc genhtml_branch_coverage=1 00:20:21.749 --rc genhtml_function_coverage=1 00:20:21.749 --rc genhtml_legend=1 00:20:21.749 --rc geninfo_all_blocks=1 00:20:21.749 --rc geninfo_unexecuted_blocks=1 00:20:21.749 00:20:21.749 ' 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:21.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.749 --rc genhtml_branch_coverage=1 00:20:21.749 --rc genhtml_function_coverage=1 00:20:21.749 --rc genhtml_legend=1 00:20:21.749 --rc geninfo_all_blocks=1 00:20:21.749 --rc 
geninfo_unexecuted_blocks=1 00:20:21.749 00:20:21.749 ' 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.749 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2789125 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2789125' 00:20:21.750 Process pid: 2789125 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2789125 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2789125 ']' 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.750 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:22.011 [2024-12-06 14:14:10.417395] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:20:22.011 [2024-12-06 14:14:10.417474] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.012 [2024-12-06 14:14:10.507031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:22.012 [2024-12-06 14:14:10.546637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.012 [2024-12-06 14:14:10.546678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.012 [2024-12-06 14:14:10.546684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.012 [2024-12-06 14:14:10.546689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.012 [2024-12-06 14:14:10.546693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.012 [2024-12-06 14:14:10.548105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.012 [2024-12-06 14:14:10.548262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.012 [2024-12-06 14:14:10.548265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.953 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.953 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:20:22.953 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:23.894 malloc0 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:23.894 14:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.894 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:23.894 00:20:23.894 00:20:23.894 CUnit - A unit testing framework for C - Version 2.1-3 00:20:23.894 http://cunit.sourceforge.net/ 00:20:23.894 00:20:23.894 00:20:23.894 Suite: nvme_compliance 00:20:23.894 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 14:14:12.472843] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:23.894 [2024-12-06 14:14:12.474124] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:23.894 [2024-12-06 14:14:12.474135] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:23.894 [2024-12-06 14:14:12.474140] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:23.894 [2024-12-06 14:14:12.475857] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:23.894 passed 00:20:24.155 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 14:14:12.552372] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.155 [2024-12-06 14:14:12.555394] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.155 passed 00:20:24.155 Test: admin_identify_ns ...[2024-12-06 14:14:12.631810] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.155 [2024-12-06 14:14:12.695464] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:24.155 [2024-12-06 14:14:12.703462] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:24.155 [2024-12-06 14:14:12.724549] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:20:24.155 passed 00:20:24.415 Test: admin_get_features_mandatory_features ...[2024-12-06 14:14:12.797807] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.415 [2024-12-06 14:14:12.800827] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.415 passed 00:20:24.415 Test: admin_get_features_optional_features ...[2024-12-06 14:14:12.878295] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.415 [2024-12-06 14:14:12.883326] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.415 passed 00:20:24.415 Test: admin_set_features_number_of_queues ...[2024-12-06 14:14:12.957042] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.675 [2024-12-06 14:14:13.061537] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.675 passed 00:20:24.675 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 14:14:13.138567] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.675 [2024-12-06 14:14:13.141586] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.675 passed 00:20:24.675 Test: admin_get_log_page_with_lpo ...[2024-12-06 14:14:13.216339] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.675 [2024-12-06 14:14:13.284462] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:24.675 [2024-12-06 14:14:13.297501] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.955 passed 00:20:24.955 Test: fabric_property_get ...[2024-12-06 14:14:13.370700] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.955 [2024-12-06 14:14:13.371906] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:24.955 [2024-12-06 14:14:13.373729] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.955 passed 00:20:24.955 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 14:14:13.452201] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:24.955 [2024-12-06 14:14:13.453401] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:24.955 [2024-12-06 14:14:13.455226] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:24.955 passed 00:20:24.955 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 14:14:13.529949] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.217 [2024-12-06 14:14:13.614463] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:25.217 [2024-12-06 14:14:13.630460] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:25.217 [2024-12-06 14:14:13.635550] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.217 passed 00:20:25.217 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 14:14:13.708807] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.217 [2024-12-06 14:14:13.710011] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:25.217 [2024-12-06 14:14:13.711828] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.217 passed 00:20:25.217 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 14:14:13.786816] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.478 [2024-12-06 14:14:13.866462] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:25.478 [2024-12-06 14:14:13.890486] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:25.478 [2024-12-06 14:14:13.895555] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.478 passed 00:20:25.478 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 14:14:13.967763] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.478 [2024-12-06 14:14:13.968965] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:25.478 [2024-12-06 14:14:13.968984] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:25.478 [2024-12-06 14:14:13.970786] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.478 passed 00:20:25.478 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 14:14:14.046808] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.740 [2024-12-06 14:14:14.142468] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:25.740 [2024-12-06 14:14:14.150459] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:25.740 [2024-12-06 14:14:14.158459] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:25.740 [2024-12-06 14:14:14.166464] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:25.740 [2024-12-06 14:14:14.190524] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.740 passed 00:20:25.740 Test: admin_create_io_sq_verify_pc ...[2024-12-06 14:14:14.266560] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:25.740 [2024-12-06 14:14:14.284466] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:25.740 [2024-12-06 14:14:14.301707] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:25.740 passed 00:20:25.740 Test: admin_create_io_qp_max_qps ...[2024-12-06 14:14:14.376153] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.142 [2024-12-06 14:14:15.480464] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:20:27.402 [2024-12-06 14:14:15.878355] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.402 passed 00:20:27.402 Test: admin_create_io_sq_shared_cq ...[2024-12-06 14:14:15.953830] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.662 [2024-12-06 14:14:16.086465] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:27.662 [2024-12-06 14:14:16.123513] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.662 passed 00:20:27.662 00:20:27.662 Run Summary: Type Total Ran Passed Failed Inactive 00:20:27.662 suites 1 1 n/a 0 0 00:20:27.662 tests 18 18 18 0 0 00:20:27.663 asserts 
360 360 360 0 n/a 00:20:27.663 00:20:27.663 Elapsed time = 1.497 seconds 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2789125 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2789125 ']' 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2789125 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2789125 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2789125' 00:20:27.663 killing process with pid 2789125 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2789125 00:20:27.663 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2789125 00:20:27.923 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:27.923 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:27.923 00:20:27.923 real 0m6.230s 00:20:27.923 user 0m17.603s 00:20:27.923 sys 0m0.557s 00:20:27.923 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.923 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:27.923 ************************************ 00:20:27.923 END TEST nvmf_vfio_user_nvme_compliance 00:20:27.923 ************************************ 00:20:27.923 14:14:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:27.923 14:14:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:27.923 14:14:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.924 14:14:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:27.924 ************************************ 00:20:27.924 START TEST nvmf_vfio_user_fuzz 00:20:27.924 ************************************ 00:20:27.924 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:27.924 * Looking for test storage... 
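For reference, the rpc_cmd sequence traced in the compliance run above (compliance.sh lines 31 through 40) is roughly equivalent to a handful of plain rpc.py calls against the freshly started nvmf_tgt. A condensed sketch, assuming the same workspace layout and that rpc.py talks to the default /var/tmp/spdk.sock:

    # Condensed from the rpc_cmd trace above; rpc_cmd is the test wrapper around scripts/rpc.py.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    sleep 1                                              # the real script uses waitforlisten instead
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    ./test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'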
00:20:27.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:27.924 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:27.924 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:20:27.924 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.186 --rc genhtml_branch_coverage=1 00:20:28.186 --rc genhtml_function_coverage=1 00:20:28.186 --rc genhtml_legend=1 00:20:28.186 --rc geninfo_all_blocks=1 00:20:28.186 --rc geninfo_unexecuted_blocks=1 00:20:28.186 00:20:28.186 ' 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.186 --rc genhtml_branch_coverage=1 00:20:28.186 --rc genhtml_function_coverage=1 00:20:28.186 --rc genhtml_legend=1 00:20:28.186 --rc geninfo_all_blocks=1 00:20:28.186 --rc geninfo_unexecuted_blocks=1 00:20:28.186 00:20:28.186 ' 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.186 --rc genhtml_branch_coverage=1 00:20:28.186 --rc genhtml_function_coverage=1 00:20:28.186 --rc genhtml_legend=1 00:20:28.186 --rc geninfo_all_blocks=1 00:20:28.186 --rc geninfo_unexecuted_blocks=1 00:20:28.186 00:20:28.186 ' 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.186 --rc genhtml_branch_coverage=1 00:20:28.186 --rc genhtml_function_coverage=1 00:20:28.186 --rc genhtml_legend=1 00:20:28.186 --rc geninfo_all_blocks=1 00:20:28.186 --rc geninfo_unexecuted_blocks=1 00:20:28.186 00:20:28.186 ' 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.186 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:28.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2790440 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2790440' 00:20:28.187 Process pid: 2790440 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2790440 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2790440 ']' 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
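The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from the waitforlisten helper in autotest_common.sh. A minimal illustrative stand-in (not the actual helper, which does more bookkeeping) that polls the target's RPC socket could look like this:

    # Illustrative stand-in for waitforlisten: poll the RPC socket until it answers,
    # bail out if the target process dies, give up after ~100 tries (~50 s).
    waitfor_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1                      # target exited
            ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1
    }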
00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.187 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:28.447 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.447 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:20:28.447 14:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:29.388 malloc0 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:20:29.388 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:21:01.514 Fuzzing completed. Shutting down the fuzz application 00:21:01.514 00:21:01.514 Dumping successful admin opcodes: 00:21:01.514 9, 10, 00:21:01.514 Dumping successful io opcodes: 00:21:01.514 0, 00:21:01.514 NS: 0x20000081ef00 I/O qp, Total commands completed: 1296102, total successful commands: 5082, random_seed: 2950184896 00:21:01.514 NS: 0x20000081ef00 admin qp, Total commands completed: 289200, total successful commands: 68, random_seed: 3243766720 00:21:01.514 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:21:01.514 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.514 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2790440 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2790440 ']' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2790440 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2790440 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2790440' 00:21:01.515 killing process with pid 2790440 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2790440 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2790440 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:21:01.515 00:21:01.515 real 0m32.174s 00:21:01.515 user 0m33.709s 00:21:01.515 sys 0m26.677s 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:01.515 ************************************ 
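The fuzz pass above ran for 30 seconds with seed 123456 and completed roughly 1.3 million I/O commands plus about 289k admin commands without a crash. Because the seed is fixed on the command line, the same pseudo-random command sequence can be replayed (up to timing differences) against an identically configured VFIOUSER target; the invocation below is copied verbatim from the trace, with only the flags whose meaning is visible here annotated:

    # -m core mask, -t run time in seconds, -S random seed, -F transport ID of the target;
    # -N and -a are passed through unchanged from vfio_user_fuzz.sh.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz \
        -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
        -N -a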
00:21:01.515 END TEST nvmf_vfio_user_fuzz 00:21:01.515 ************************************ 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:01.515 ************************************ 00:21:01.515 START TEST nvmf_auth_target 00:21:01.515 ************************************ 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:01.515 * Looking for test storage... 00:21:01.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:01.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.515 --rc genhtml_branch_coverage=1 00:21:01.515 --rc genhtml_function_coverage=1 00:21:01.515 --rc genhtml_legend=1 00:21:01.515 --rc geninfo_all_blocks=1 00:21:01.515 --rc geninfo_unexecuted_blocks=1 00:21:01.515 00:21:01.515 ' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:01.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.515 --rc genhtml_branch_coverage=1 00:21:01.515 --rc genhtml_function_coverage=1 00:21:01.515 --rc genhtml_legend=1 00:21:01.515 --rc geninfo_all_blocks=1 00:21:01.515 --rc geninfo_unexecuted_blocks=1 00:21:01.515 00:21:01.515 ' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:01.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.515 --rc genhtml_branch_coverage=1 00:21:01.515 --rc genhtml_function_coverage=1 00:21:01.515 --rc genhtml_legend=1 00:21:01.515 --rc geninfo_all_blocks=1 00:21:01.515 --rc geninfo_unexecuted_blocks=1 00:21:01.515 00:21:01.515 ' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:01.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.515 --rc genhtml_branch_coverage=1 00:21:01.515 --rc genhtml_function_coverage=1 00:21:01.515 --rc genhtml_legend=1 00:21:01.515 --rc geninfo_all_blocks=1 00:21:01.515 --rc geninfo_unexecuted_blocks=1 00:21:01.515 00:21:01.515 ' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.515 14:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.515 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:01.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:01.516 14:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:08.105 
14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:08.105 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.105 14:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:08.105 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:08.105 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:08.105 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:08.105 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:08.106 14:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:08.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:21:08.106 00:21:08.106 --- 10.0.0.2 ping statistics --- 00:21:08.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.106 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:21:08.106 00:21:08.106 --- 10.0.0.1 ping statistics --- 00:21:08.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.106 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2800411 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2800411 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2800411 ']' 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
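The network bring-up traced above splits the two E810 ports (0000:4b:00.0/00.1, driver ice) between the default namespace, which keeps the initiator side (cvl_0_1, 10.0.0.1), and a dedicated namespace cvl_0_0_ns_spdk holding the target side (cvl_0_0, 10.0.0.2), then proves reachability in both directions. Condensed, with the interface names and addresses taken from the trace, the sequence is:

    # target interface moves into its own namespace; initiator stays in the default one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open TCP/4420 on the initiator-side interface, then check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1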
00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.106 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2800490 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=713b009466bf8b434f54081b3f9651e5fe72e3396f9b983d 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:08.679 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wBB 00:21:08.680 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 713b009466bf8b434f54081b3f9651e5fe72e3396f9b983d 0 00:21:08.680 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 713b009466bf8b434f54081b3f9651e5fe72e3396f9b983d 0 00:21:08.680 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:08.680 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:08.680 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=713b009466bf8b434f54081b3f9651e5fe72e3396f9b983d 00:21:08.680 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:21:08.680 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
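gen_dhchap_key, as traced here, draws len/2 random bytes and hex-encodes them, so the requests for 48, 64 and 32 hex characters above become 24-, 32- and 16-byte reads from /dev/urandom; the digest name appears only to select the DHHC-1 hash identifier (null=0, sha256=1, sha384=2, sha512=3 per the digests map in the trace). A minimal stand-in for that generation step (the wrapper function name below is illustrative, not the harness helper):

    # illustrative stand-in for the gen_dhchap_key <digest> <len> generation step
    gen_hex_key() {
        local len=$1                              # requested hex length (32, 48, 64)
        xxd -p -c0 -l $((len / 2)) /dev/urandom   # len/2 random bytes -> len hex chars
    }

    key=$(gen_hex_key 48)                 # e.g. 713b0094... in the run above
    file=$(mktemp -t spdk.key-null.XXX)   # the formatted key lands here, chmod 0600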
00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wBB 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wBB 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.wBB 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=302d9080fa99db792c992946bd379f0cbadbf8c534abd5ba9c3ac415e4c00219 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vdZ 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 302d9080fa99db792c992946bd379f0cbadbf8c534abd5ba9c3ac415e4c00219 3 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 302d9080fa99db792c992946bd379f0cbadbf8c534abd5ba9c3ac415e4c00219 3 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=302d9080fa99db792c992946bd379f0cbadbf8c534abd5ba9c3ac415e4c00219 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vdZ 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vdZ 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.vdZ 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
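The `python -` step is where format_dhchap_key wraps the raw hex into an interchange secret. Judging from the secrets replayed later in this log (DHHC-1:00:NzEzYjAw...==: for the 713b0094... key), the layout appears to be DHHC-1:<hash id>:<base64 of the ASCII hex key plus a 4-byte checksum>:, in the style of nvme-cli DH-HMAC-CHAP secrets; the checksum handling is internal to SPDK, so the sketch below is an approximation rather than the verbatim helper:

    # approximate reconstruction of the DHHC-1 formatting step (checksum detail assumed)
    key=713b009466bf8b434f54081b3f9651e5fe72e3396f9b983d
    digest=0   # 0=null, 1=sha256, 2=sha384, 3=sha512
    python3 - "$key" "$digest" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                   # ASCII hex key, as generated above
    crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: CRC32 of the key appended before base64
    print(f"DHHC-1:{int(sys.argv[2]):02d}:" + base64.b64encode(key + crc).decode() + ":")
    EOF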
00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4d86286ff8b5d989f05aad908557b7eb 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.A4c 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4d86286ff8b5d989f05aad908557b7eb 1 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4d86286ff8b5d989f05aad908557b7eb 1 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4d86286ff8b5d989f05aad908557b7eb 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.A4c 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.A4c 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.A4c 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9a68834d65f355c13e1261a2d881715d352feea76ef2e78d 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.47S 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9a68834d65f355c13e1261a2d881715d352feea76ef2e78d 2 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9a68834d65f355c13e1261a2d881715d352feea76ef2e78d 2 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:08.942 14:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9a68834d65f355c13e1261a2d881715d352feea76ef2e78d 00:21:08.942 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.47S 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.47S 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.47S 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:08.943 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=801f5778fd3dd07df6e0a3811b116a515064eccaf3ee22c2 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.o2r 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 801f5778fd3dd07df6e0a3811b116a515064eccaf3ee22c2 2 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 801f5778fd3dd07df6e0a3811b116a515064eccaf3ee22c2 2 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=801f5778fd3dd07df6e0a3811b116a515064eccaf3ee22c2 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.o2r 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.o2r 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.o2r 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
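In this test, keys[i] holds the secret the host presents during DH-HMAC-CHAP, while ckeys[i], when set, holds the controller-side secret used for bidirectional authentication (keys[3] is deliberately left without a ckey). Each generated file is later registered under the names keyN/ckeyN in both applications, as in this condensed form of the registration loop traced further down (rpc_cmd and hostrpc are the harness wrappers around scripts/rpc.py for the target and host RPC sockets respectively):

    # condensed form of the key registration loop (target/auth.sh@108-113 in the trace)
    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"        # target app, /var/tmp/spdk.sock
        hostrpc keyring_file_add_key "key$i" "${keys[$i]}"        # host app,   /var/tmp/host.sock
        if [[ -n ${ckeys[$i]} ]]; then
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done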
00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d0841f92f6e7a83bdac8ec1247ce0b75 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1GV 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d0841f92f6e7a83bdac8ec1247ce0b75 1 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d0841f92f6e7a83bdac8ec1247ce0b75 1 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d0841f92f6e7a83bdac8ec1247ce0b75 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1GV 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1GV 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.1GV 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0a2eea66f05c640c106ba024e54a309063b60e7bd34288ae4bdb9584e80d537e 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.u7M 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 0a2eea66f05c640c106ba024e54a309063b60e7bd34288ae4bdb9584e80d537e 3 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0a2eea66f05c640c106ba024e54a309063b60e7bd34288ae4bdb9584e80d537e 3 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0a2eea66f05c640c106ba024e54a309063b60e7bd34288ae4bdb9584e80d537e 00:21:09.205 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.u7M 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.u7M 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.u7M 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2800411 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2800411 ']' 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.206 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.468 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.468 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:09.468 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2800490 /var/tmp/host.sock 00:21:09.468 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2800490 ']' 00:21:09.468 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:09.468 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.468 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:09.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
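Two SPDK applications drive the rest of the test: nvmf_tgt (pid 2800411), started inside cvl_0_0_ns_spdk and listening on the default RPC socket /var/tmp/spdk.sock, plays the NVMe-oF target, while spdk_tgt (pid 2800490) on /var/tmp/host.sock acts as the host through its bdev_nvme layer; every hostrpc call in the trace is simply rpc.py pointed at the second socket. The two launch commands from the trace, with paths shortened:

    # target side, inside the test namespace, default RPC socket /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    # host side, separate app whose RPC socket serves as the "hostrpc" endpoint
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &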
00:21:09.468 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.468 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wBB 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wBB 00:21:09.730 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wBB 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.vdZ ]] 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vdZ 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vdZ 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vdZ 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.A4c 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.994 14:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.A4c 00:21:09.994 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.A4c 00:21:10.256 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.47S ]] 00:21:10.256 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.47S 00:21:10.256 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.256 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.256 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.256 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.47S 00:21:10.256 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.47S 00:21:10.517 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:10.517 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.o2r 00:21:10.517 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.517 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.517 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.517 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.o2r 00:21:10.517 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.o2r 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.1GV ]] 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1GV 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1GV 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1GV 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:10.779 14:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.u7M 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.u7M 00:21:10.779 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.u7M 00:21:11.040 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:11.040 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:11.040 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.040 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.040 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:11.040 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.302 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.302 
14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.564 00:21:11.564 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.564 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.564 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.564 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.564 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.564 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.564 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.826 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.826 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.826 { 00:21:11.826 "cntlid": 1, 00:21:11.826 "qid": 0, 00:21:11.826 "state": "enabled", 00:21:11.826 "thread": "nvmf_tgt_poll_group_000", 00:21:11.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:11.826 "listen_address": { 00:21:11.826 "trtype": "TCP", 00:21:11.826 "adrfam": "IPv4", 00:21:11.826 "traddr": "10.0.0.2", 00:21:11.826 "trsvcid": "4420" 00:21:11.826 }, 00:21:11.826 "peer_address": { 00:21:11.826 "trtype": "TCP", 00:21:11.826 "adrfam": "IPv4", 00:21:11.826 "traddr": "10.0.0.1", 00:21:11.826 "trsvcid": "56588" 00:21:11.826 }, 00:21:11.826 "auth": { 00:21:11.826 "state": "completed", 00:21:11.826 "digest": "sha256", 00:21:11.826 "dhgroup": "null" 00:21:11.826 } 00:21:11.826 } 00:21:11.826 ]' 00:21:11.826 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.826 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.826 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.826 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:11.826 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.826 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.826 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.826 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.087 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:12.087 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:12.660 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.660 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:12.660 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.660 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.660 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.660 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.660 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:12.660 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.921 14:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.921 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.921 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.182 { 00:21:13.182 "cntlid": 3, 00:21:13.182 "qid": 0, 00:21:13.182 "state": "enabled", 00:21:13.182 "thread": "nvmf_tgt_poll_group_000", 00:21:13.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:13.182 "listen_address": { 00:21:13.182 "trtype": "TCP", 00:21:13.182 "adrfam": "IPv4", 00:21:13.182 "traddr": "10.0.0.2", 00:21:13.182 "trsvcid": "4420" 00:21:13.182 }, 00:21:13.182 "peer_address": { 00:21:13.182 "trtype": "TCP", 00:21:13.182 "adrfam": "IPv4", 00:21:13.182 "traddr": "10.0.0.1", 00:21:13.182 "trsvcid": "56614" 00:21:13.182 }, 00:21:13.182 "auth": { 00:21:13.182 "state": "completed", 00:21:13.182 "digest": "sha256", 00:21:13.182 "dhgroup": "null" 00:21:13.182 } 00:21:13.182 } 00:21:13.182 ]' 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.182 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.442 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:13.442 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.442 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.442 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.442 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.703 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:13.703 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.276 14:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.276 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.536 00:21:14.536 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.536 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.536 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.797 { 00:21:14.797 "cntlid": 5, 00:21:14.797 "qid": 0, 00:21:14.797 "state": "enabled", 00:21:14.797 "thread": "nvmf_tgt_poll_group_000", 00:21:14.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:14.797 "listen_address": { 00:21:14.797 "trtype": "TCP", 00:21:14.797 "adrfam": "IPv4", 00:21:14.797 "traddr": "10.0.0.2", 00:21:14.797 "trsvcid": "4420" 00:21:14.797 }, 00:21:14.797 "peer_address": { 00:21:14.797 "trtype": "TCP", 00:21:14.797 "adrfam": "IPv4", 00:21:14.797 "traddr": "10.0.0.1", 00:21:14.797 "trsvcid": "56638" 00:21:14.797 }, 00:21:14.797 "auth": { 00:21:14.797 "state": "completed", 00:21:14.797 "digest": "sha256", 00:21:14.797 "dhgroup": "null" 00:21:14.797 } 00:21:14.797 } 00:21:14.797 ]' 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:14.797 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.058 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.058 14:15:03 
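
Every qpair dump like the one above (cntlid 5 here) is followed by the same three assertions. Reduced to its essentials, the check amounts to the sketch below; verify_qpair_auth is an assumed name, but the jq paths and expected values are exactly those in the trace.

# Minimal sketch of the per-round auth verification (target/auth.sh@73-77).
verify_qpair_auth() {
    local digest=$1 dhgroup=$2 qpairs
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
}

# For the qpair shown just above (sha256 digest, null DH group):
verify_qpair_auth sha256 null
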
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.058 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.058 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:15.058 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:15.629 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.629 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.629 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.629 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.629 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.629 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.629 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:15.629 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.889 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.150 00:21:16.150 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.150 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.150 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.410 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.410 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.410 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.410 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.410 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.410 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.410 { 00:21:16.410 "cntlid": 7, 00:21:16.410 "qid": 0, 00:21:16.410 "state": "enabled", 00:21:16.410 "thread": "nvmf_tgt_poll_group_000", 00:21:16.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:16.410 "listen_address": { 00:21:16.410 "trtype": "TCP", 00:21:16.410 "adrfam": "IPv4", 00:21:16.410 "traddr": "10.0.0.2", 00:21:16.410 "trsvcid": "4420" 00:21:16.410 }, 00:21:16.410 "peer_address": { 00:21:16.410 "trtype": "TCP", 00:21:16.410 "adrfam": "IPv4", 00:21:16.410 "traddr": "10.0.0.1", 00:21:16.410 "trsvcid": "56662" 00:21:16.410 }, 00:21:16.410 "auth": { 00:21:16.410 "state": "completed", 00:21:16.410 "digest": "sha256", 00:21:16.410 "dhgroup": "null" 00:21:16.410 } 00:21:16.410 } 00:21:16.410 ]' 00:21:16.410 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.410 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:16.410 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.411 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:16.411 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.411 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.411 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.411 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.670 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:16.670 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:17.240 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.240 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.240 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.240 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.240 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.240 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.240 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.240 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:17.240 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.500 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.762 00:21:17.762 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.762 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.762 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.762 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.763 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.763 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.763 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.024 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.024 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.024 { 00:21:18.024 "cntlid": 9, 00:21:18.024 "qid": 0, 00:21:18.024 "state": "enabled", 00:21:18.024 "thread": "nvmf_tgt_poll_group_000", 00:21:18.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:18.024 "listen_address": { 00:21:18.024 "trtype": "TCP", 00:21:18.024 "adrfam": "IPv4", 00:21:18.024 "traddr": "10.0.0.2", 00:21:18.024 "trsvcid": "4420" 00:21:18.024 }, 00:21:18.024 "peer_address": { 00:21:18.024 "trtype": "TCP", 00:21:18.024 "adrfam": "IPv4", 00:21:18.024 "traddr": "10.0.0.1", 00:21:18.024 "trsvcid": "56690" 00:21:18.024 }, 00:21:18.024 "auth": { 00:21:18.024 "state": "completed", 00:21:18.024 "digest": "sha256", 00:21:18.024 "dhgroup": "ffdhe2048" 00:21:18.024 } 00:21:18.024 } 00:21:18.024 ]' 00:21:18.024 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.024 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:18.024 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.024 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:21:18.024 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.024 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.024 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.024 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.286 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:18.286 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:18.864 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.864 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:18.864 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.864 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.864 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.864 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.864 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:18.864 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.169 14:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.169 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.169 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.484 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.484 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.484 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.484 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.484 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.484 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.484 { 00:21:19.484 "cntlid": 11, 00:21:19.484 "qid": 0, 00:21:19.484 "state": "enabled", 00:21:19.484 "thread": "nvmf_tgt_poll_group_000", 00:21:19.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:19.484 "listen_address": { 00:21:19.484 "trtype": "TCP", 00:21:19.484 "adrfam": "IPv4", 00:21:19.484 "traddr": "10.0.0.2", 00:21:19.484 "trsvcid": "4420" 00:21:19.484 }, 00:21:19.484 "peer_address": { 00:21:19.484 "trtype": "TCP", 00:21:19.484 "adrfam": "IPv4", 00:21:19.484 "traddr": "10.0.0.1", 00:21:19.484 "trsvcid": "36840" 00:21:19.484 }, 00:21:19.484 "auth": { 00:21:19.484 "state": "completed", 00:21:19.484 "digest": "sha256", 00:21:19.484 "dhgroup": "ffdhe2048" 00:21:19.484 } 00:21:19.484 } 00:21:19.484 ]' 00:21:19.484 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.484 14:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:19.484 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.484 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:19.484 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.484 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.484 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.484 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.773 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:19.773 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:20.344 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.344 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.344 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.344 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.344 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.344 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.344 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:20.344 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:20.605 14:15:09 
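
The nvme_connect/nvme disconnect pairs interleaved through this trace drive the same handshake through the kernel initiator. Stripped of the log prefixes, one such invocation has the shape below; the DHHC-1 secrets are shown as placeholders here, the real base64 strings appear verbatim in the trace.

# Kernel-initiator leg of a round; the <...> placeholders stand in for the
# DHHC-1 secrets printed in the log.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:01:<host key for this round>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller key for this round>:'

# The test disconnects again right away so the next key/DH-group pair can run:
nvme disconnect -n "$subnqn"
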
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.605 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.865 00:21:20.865 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.865 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.865 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.865 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.866 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.866 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.866 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.126 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.126 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.126 { 00:21:21.126 "cntlid": 13, 00:21:21.126 "qid": 0, 00:21:21.126 "state": "enabled", 00:21:21.126 "thread": "nvmf_tgt_poll_group_000", 00:21:21.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:21.127 "listen_address": { 00:21:21.127 "trtype": "TCP", 00:21:21.127 "adrfam": "IPv4", 00:21:21.127 "traddr": "10.0.0.2", 00:21:21.127 "trsvcid": "4420" 00:21:21.127 }, 00:21:21.127 "peer_address": { 00:21:21.127 "trtype": "TCP", 00:21:21.127 "adrfam": "IPv4", 00:21:21.127 "traddr": "10.0.0.1", 00:21:21.127 "trsvcid": "36860" 00:21:21.127 }, 00:21:21.127 "auth": { 00:21:21.127 "state": "completed", 00:21:21.127 "digest": 
"sha256", 00:21:21.127 "dhgroup": "ffdhe2048" 00:21:21.127 } 00:21:21.127 } 00:21:21.127 ]' 00:21:21.127 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.127 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:21.127 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.127 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:21.127 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.127 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.127 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.127 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.387 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:21.387 14:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:21.956 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.956 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:21.956 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.956 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.956 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.956 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.956 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:21.956 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.216 14:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.216 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.216 00:21:22.475 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.475 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.475 14:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.475 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.475 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.475 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.475 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.475 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.475 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.475 { 00:21:22.475 "cntlid": 15, 00:21:22.475 "qid": 0, 00:21:22.475 "state": "enabled", 00:21:22.476 "thread": "nvmf_tgt_poll_group_000", 00:21:22.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:22.476 "listen_address": { 00:21:22.476 "trtype": "TCP", 00:21:22.476 "adrfam": "IPv4", 00:21:22.476 "traddr": "10.0.0.2", 00:21:22.476 "trsvcid": "4420" 00:21:22.476 }, 00:21:22.476 "peer_address": { 00:21:22.476 "trtype": "TCP", 00:21:22.476 "adrfam": "IPv4", 00:21:22.476 "traddr": "10.0.0.1", 00:21:22.476 
"trsvcid": "36878" 00:21:22.476 }, 00:21:22.476 "auth": { 00:21:22.476 "state": "completed", 00:21:22.476 "digest": "sha256", 00:21:22.476 "dhgroup": "ffdhe2048" 00:21:22.476 } 00:21:22.476 } 00:21:22.476 ]' 00:21:22.476 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.476 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:22.476 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.735 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.735 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.735 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.735 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.735 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.994 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:22.994 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:23.563 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.563 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:23.563 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.563 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.563 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.563 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.563 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.563 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:23.563 14:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:23.563 14:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.563 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.564 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.822 00:21:23.822 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.822 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.822 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.080 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.080 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.080 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.080 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.080 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.080 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.080 { 00:21:24.080 "cntlid": 17, 00:21:24.080 "qid": 0, 00:21:24.080 "state": "enabled", 00:21:24.080 "thread": "nvmf_tgt_poll_group_000", 00:21:24.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:24.080 "listen_address": { 00:21:24.080 "trtype": "TCP", 00:21:24.080 "adrfam": "IPv4", 
00:21:24.080 "traddr": "10.0.0.2", 00:21:24.080 "trsvcid": "4420" 00:21:24.080 }, 00:21:24.080 "peer_address": { 00:21:24.080 "trtype": "TCP", 00:21:24.080 "adrfam": "IPv4", 00:21:24.080 "traddr": "10.0.0.1", 00:21:24.080 "trsvcid": "36916" 00:21:24.080 }, 00:21:24.080 "auth": { 00:21:24.080 "state": "completed", 00:21:24.080 "digest": "sha256", 00:21:24.080 "dhgroup": "ffdhe3072" 00:21:24.080 } 00:21:24.080 } 00:21:24.080 ]' 00:21:24.080 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.080 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:24.080 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.340 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.340 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.340 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.340 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.340 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.340 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:24.340 14:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:24.910 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.910 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.910 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.910 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.910 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.910 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.910 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:24.910 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.170 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.430 00:21:25.430 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.430 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.430 14:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.690 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.690 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.691 { 
00:21:25.691 "cntlid": 19, 00:21:25.691 "qid": 0, 00:21:25.691 "state": "enabled", 00:21:25.691 "thread": "nvmf_tgt_poll_group_000", 00:21:25.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:25.691 "listen_address": { 00:21:25.691 "trtype": "TCP", 00:21:25.691 "adrfam": "IPv4", 00:21:25.691 "traddr": "10.0.0.2", 00:21:25.691 "trsvcid": "4420" 00:21:25.691 }, 00:21:25.691 "peer_address": { 00:21:25.691 "trtype": "TCP", 00:21:25.691 "adrfam": "IPv4", 00:21:25.691 "traddr": "10.0.0.1", 00:21:25.691 "trsvcid": "36938" 00:21:25.691 }, 00:21:25.691 "auth": { 00:21:25.691 "state": "completed", 00:21:25.691 "digest": "sha256", 00:21:25.691 "dhgroup": "ffdhe3072" 00:21:25.691 } 00:21:25.691 } 00:21:25.691 ]' 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.691 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.951 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:25.951 14:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:26.525 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.525 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.525 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.525 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.525 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.525 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.525 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:26.526 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.792 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.052 00:21:27.052 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.052 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.052 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.312 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.312 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.312 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.312 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.312 14:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.312 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.312 { 00:21:27.312 "cntlid": 21, 00:21:27.312 "qid": 0, 00:21:27.312 "state": "enabled", 00:21:27.312 "thread": "nvmf_tgt_poll_group_000", 00:21:27.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:27.312 "listen_address": { 00:21:27.312 "trtype": "TCP", 00:21:27.312 "adrfam": "IPv4", 00:21:27.312 "traddr": "10.0.0.2", 00:21:27.312 "trsvcid": "4420" 00:21:27.312 }, 00:21:27.312 "peer_address": { 00:21:27.312 "trtype": "TCP", 00:21:27.312 "adrfam": "IPv4", 00:21:27.312 "traddr": "10.0.0.1", 00:21:27.312 "trsvcid": "36960" 00:21:27.312 }, 00:21:27.312 "auth": { 00:21:27.312 "state": "completed", 00:21:27.313 "digest": "sha256", 00:21:27.313 "dhgroup": "ffdhe3072" 00:21:27.313 } 00:21:27.313 } 00:21:27.313 ]' 00:21:27.313 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.313 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:27.313 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.313 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.313 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.313 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.313 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.313 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.572 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:27.572 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:28.144 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.144 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.144 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.144 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.144 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:28.144 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.144 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:28.144 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.404 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.665 00:21:28.665 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.665 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.665 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.927 14:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.927 { 00:21:28.927 "cntlid": 23, 00:21:28.927 "qid": 0, 00:21:28.927 "state": "enabled", 00:21:28.927 "thread": "nvmf_tgt_poll_group_000", 00:21:28.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:28.927 "listen_address": { 00:21:28.927 "trtype": "TCP", 00:21:28.927 "adrfam": "IPv4", 00:21:28.927 "traddr": "10.0.0.2", 00:21:28.927 "trsvcid": "4420" 00:21:28.927 }, 00:21:28.927 "peer_address": { 00:21:28.927 "trtype": "TCP", 00:21:28.927 "adrfam": "IPv4", 00:21:28.927 "traddr": "10.0.0.1", 00:21:28.927 "trsvcid": "38938" 00:21:28.927 }, 00:21:28.927 "auth": { 00:21:28.927 "state": "completed", 00:21:28.927 "digest": "sha256", 00:21:28.927 "dhgroup": "ffdhe3072" 00:21:28.927 } 00:21:28.927 } 00:21:28.927 ]' 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.927 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.187 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:29.187 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:29.757 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.758 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:29.758 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.758 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.758 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:29.758 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.758 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.758 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:29.758 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.018 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.278 00:21:30.278 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.278 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.278 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.278 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.278 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.278 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.278 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.278 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.278 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.278 { 00:21:30.278 "cntlid": 25, 00:21:30.278 "qid": 0, 00:21:30.278 "state": "enabled", 00:21:30.278 "thread": "nvmf_tgt_poll_group_000", 00:21:30.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:30.278 "listen_address": { 00:21:30.278 "trtype": "TCP", 00:21:30.278 "adrfam": "IPv4", 00:21:30.278 "traddr": "10.0.0.2", 00:21:30.278 "trsvcid": "4420" 00:21:30.278 }, 00:21:30.278 "peer_address": { 00:21:30.278 "trtype": "TCP", 00:21:30.278 "adrfam": "IPv4", 00:21:30.278 "traddr": "10.0.0.1", 00:21:30.278 "trsvcid": "38970" 00:21:30.278 }, 00:21:30.278 "auth": { 00:21:30.278 "state": "completed", 00:21:30.278 "digest": "sha256", 00:21:30.278 "dhgroup": "ffdhe4096" 00:21:30.278 } 00:21:30.278 } 00:21:30.278 ]' 00:21:30.278 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.539 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:30.539 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.539 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:30.539 14:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.539 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.539 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.539 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.798 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:30.798 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.369 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.370 14:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.629 00:21:31.629 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.629 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.629 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.890 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.890 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.890 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.890 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.890 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.890 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.890 { 00:21:31.890 "cntlid": 27, 00:21:31.890 "qid": 0, 00:21:31.890 "state": "enabled", 00:21:31.890 "thread": "nvmf_tgt_poll_group_000", 00:21:31.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:31.890 "listen_address": { 00:21:31.890 "trtype": "TCP", 00:21:31.890 "adrfam": "IPv4", 00:21:31.890 "traddr": "10.0.0.2", 00:21:31.890 "trsvcid": "4420" 00:21:31.890 }, 00:21:31.890 "peer_address": { 00:21:31.890 "trtype": "TCP", 00:21:31.890 "adrfam": "IPv4", 00:21:31.890 "traddr": "10.0.0.1", 00:21:31.890 "trsvcid": "39012" 00:21:31.890 }, 00:21:31.890 "auth": { 00:21:31.890 "state": "completed", 00:21:31.890 "digest": "sha256", 00:21:31.890 "dhgroup": "ffdhe4096" 00:21:31.890 } 00:21:31.890 } 00:21:31.890 ]' 00:21:31.890 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.890 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.890 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.151 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.151 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.151 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.151 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.151 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.151 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:32.151 14:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:32.721 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:32.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.721 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.721 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.721 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.981 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.240 00:21:33.240 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
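For reference, each pass of the loop captured above reduces to roughly the RPC sequence below. This is a condensed sketch rather than captured output: the rpc.py path, sockets, addresses, and flags are taken from the log itself, while the key names key2/ckey2 are assumed to have been registered in the keyring earlier in auth.sh, and the target app is assumed to listen on the default RPC socket (the script's rpc_cmd passes no -s, whereas the host app is driven through /var/tmp/host.sock).

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
# Host side: restrict the initiator to one digest/dhgroup combination for this pass.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# Target side (default RPC socket): allow the host on the subsystem with the matching DH-HMAC-CHAP key pair.
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Host side: attach the controller; this performs the in-band authentication over TCP.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Target side: confirm the qpair finished authentication with the expected parameters ("completed", "sha256", "ffdhe4096").
$RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth | .state, .digest, .dhgroup'
# Tear down before the next digest/dhgroup/key combination.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

The nvme connect/disconnect entries interleaved in the log exercise the same secrets through the kernel initiator (nvme-cli with --dhchap-secret/--dhchap-ctrl-secret) before the bdev-level connection for that key is verified.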
00:21:33.240 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.240 14:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.500 { 00:21:33.500 "cntlid": 29, 00:21:33.500 "qid": 0, 00:21:33.500 "state": "enabled", 00:21:33.500 "thread": "nvmf_tgt_poll_group_000", 00:21:33.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:33.500 "listen_address": { 00:21:33.500 "trtype": "TCP", 00:21:33.500 "adrfam": "IPv4", 00:21:33.500 "traddr": "10.0.0.2", 00:21:33.500 "trsvcid": "4420" 00:21:33.500 }, 00:21:33.500 "peer_address": { 00:21:33.500 "trtype": "TCP", 00:21:33.500 "adrfam": "IPv4", 00:21:33.500 "traddr": "10.0.0.1", 00:21:33.500 "trsvcid": "39038" 00:21:33.500 }, 00:21:33.500 "auth": { 00:21:33.500 "state": "completed", 00:21:33.500 "digest": "sha256", 00:21:33.500 "dhgroup": "ffdhe4096" 00:21:33.500 } 00:21:33.500 } 00:21:33.500 ]' 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:33.500 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.759 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.760 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.760 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.760 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:33.760 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: 
--dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:34.329 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.329 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.330 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.330 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.330 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.330 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.330 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:34.330 14:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.589 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.850 00:21:34.850 14:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.850 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.850 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.110 { 00:21:35.110 "cntlid": 31, 00:21:35.110 "qid": 0, 00:21:35.110 "state": "enabled", 00:21:35.110 "thread": "nvmf_tgt_poll_group_000", 00:21:35.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:35.110 "listen_address": { 00:21:35.110 "trtype": "TCP", 00:21:35.110 "adrfam": "IPv4", 00:21:35.110 "traddr": "10.0.0.2", 00:21:35.110 "trsvcid": "4420" 00:21:35.110 }, 00:21:35.110 "peer_address": { 00:21:35.110 "trtype": "TCP", 00:21:35.110 "adrfam": "IPv4", 00:21:35.110 "traddr": "10.0.0.1", 00:21:35.110 "trsvcid": "39060" 00:21:35.110 }, 00:21:35.110 "auth": { 00:21:35.110 "state": "completed", 00:21:35.110 "digest": "sha256", 00:21:35.110 "dhgroup": "ffdhe4096" 00:21:35.110 } 00:21:35.110 } 00:21:35.110 ]' 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.110 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.370 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:35.370 14:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:35.941 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.941 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.941 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.941 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.941 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.941 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.941 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.941 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:35.941 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.201 14:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.462 00:21:36.462 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.462 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.462 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.723 { 00:21:36.723 "cntlid": 33, 00:21:36.723 "qid": 0, 00:21:36.723 "state": "enabled", 00:21:36.723 "thread": "nvmf_tgt_poll_group_000", 00:21:36.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:36.723 "listen_address": { 00:21:36.723 "trtype": "TCP", 00:21:36.723 "adrfam": "IPv4", 00:21:36.723 "traddr": "10.0.0.2", 00:21:36.723 "trsvcid": "4420" 00:21:36.723 }, 00:21:36.723 "peer_address": { 00:21:36.723 "trtype": "TCP", 00:21:36.723 "adrfam": "IPv4", 00:21:36.723 "traddr": "10.0.0.1", 00:21:36.723 "trsvcid": "39090" 00:21:36.723 }, 00:21:36.723 "auth": { 00:21:36.723 "state": "completed", 00:21:36.723 "digest": "sha256", 00:21:36.723 "dhgroup": "ffdhe6144" 00:21:36.723 } 00:21:36.723 } 00:21:36.723 ]' 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.723 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.983 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.983 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.983 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.983 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret 
DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:36.983 14:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:37.554 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.554 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.554 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.554 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.554 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.554 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.554 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:37.554 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.815 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.077 00:21:38.077 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.077 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.077 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.338 { 00:21:38.338 "cntlid": 35, 00:21:38.338 "qid": 0, 00:21:38.338 "state": "enabled", 00:21:38.338 "thread": "nvmf_tgt_poll_group_000", 00:21:38.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:38.338 "listen_address": { 00:21:38.338 "trtype": "TCP", 00:21:38.338 "adrfam": "IPv4", 00:21:38.338 "traddr": "10.0.0.2", 00:21:38.338 "trsvcid": "4420" 00:21:38.338 }, 00:21:38.338 "peer_address": { 00:21:38.338 "trtype": "TCP", 00:21:38.338 "adrfam": "IPv4", 00:21:38.338 "traddr": "10.0.0.1", 00:21:38.338 "trsvcid": "39114" 00:21:38.338 }, 00:21:38.338 "auth": { 00:21:38.338 "state": "completed", 00:21:38.338 "digest": "sha256", 00:21:38.338 "dhgroup": "ffdhe6144" 00:21:38.338 } 00:21:38.338 } 00:21:38.338 ]' 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.338 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.600 14:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.600 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.600 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.600 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:38.600 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:39.169 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.169 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.169 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.169 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.169 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.169 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.169 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:39.169 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.429 14:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.690 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.949 { 00:21:39.949 "cntlid": 37, 00:21:39.949 "qid": 0, 00:21:39.949 "state": "enabled", 00:21:39.949 "thread": "nvmf_tgt_poll_group_000", 00:21:39.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:39.949 "listen_address": { 00:21:39.949 "trtype": "TCP", 00:21:39.949 "adrfam": "IPv4", 00:21:39.949 "traddr": "10.0.0.2", 00:21:39.949 "trsvcid": "4420" 00:21:39.949 }, 00:21:39.949 "peer_address": { 00:21:39.949 "trtype": "TCP", 00:21:39.949 "adrfam": "IPv4", 00:21:39.949 "traddr": "10.0.0.1", 00:21:39.949 "trsvcid": "53206" 00:21:39.949 }, 00:21:39.949 "auth": { 00:21:39.949 "state": "completed", 00:21:39.949 "digest": "sha256", 00:21:39.949 "dhgroup": "ffdhe6144" 00:21:39.949 } 00:21:39.949 } 00:21:39.949 ]' 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:39.949 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.209 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:40.209 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.209 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.209 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:40.209 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.209 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:40.209 14:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:40.779 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.040 14:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.040 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.611 00:21:41.611 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.611 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.611 14:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.611 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.611 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.611 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.611 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.611 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.611 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.611 { 00:21:41.611 "cntlid": 39, 00:21:41.611 "qid": 0, 00:21:41.611 "state": "enabled", 00:21:41.611 "thread": "nvmf_tgt_poll_group_000", 00:21:41.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:41.611 "listen_address": { 00:21:41.611 "trtype": "TCP", 00:21:41.611 "adrfam": "IPv4", 00:21:41.611 "traddr": "10.0.0.2", 00:21:41.611 "trsvcid": "4420" 00:21:41.612 }, 00:21:41.612 "peer_address": { 00:21:41.612 "trtype": "TCP", 00:21:41.612 "adrfam": "IPv4", 00:21:41.612 "traddr": "10.0.0.1", 00:21:41.612 "trsvcid": "53238" 00:21:41.612 }, 00:21:41.612 "auth": { 00:21:41.612 "state": "completed", 00:21:41.612 "digest": "sha256", 00:21:41.612 "dhgroup": "ffdhe6144" 00:21:41.612 } 00:21:41.612 } 00:21:41.612 ]' 00:21:41.612 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.612 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:41.612 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.872 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.872 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.872 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:41.872 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.872 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.872 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:41.872 14:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:42.441 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
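The trace above repeats one connect_authenticate pass per digest/dhgroup/key combination, driven entirely through SPDK RPCs. As a rough sketch only, a single pass has roughly the following shape, assuming (as set up earlier in this run and not shown here) that the target already exposes nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420, that DH-HMAC-CHAP keys named key1/ckey1 are loaded on both target and host, and that the host-side SPDK application serves its RPC socket at /var/tmp/host.sock; the digest and dhgroup shown are just one of the combinations the loop walks through.

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass; key names, addresses and the
# digest/dhgroup choice mirror the trace above, everything else is assumed context.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Host side (the test's hostrpc wrapper, i.e. -s /var/tmp/host.sock):
# restrict the initiator to a single digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Target side (the test's rpc_cmd, default RPC socket): allow this host on the
# subsystem with key1, plus ckey1 for bidirectional authentication.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller; the attach only succeeds if DH-HMAC-CHAP
# completes with the configured digest/dhgroup.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify both ends: the controller shows up on the host, and the target's
# qpair reports the expected auth state, digest and dhgroup.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'

# Tear down before the next combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the log itself the same checks appear as jq filters over nvmf_subsystem_get_qpairs output, such as the JSON qpairs blocks that follow below.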
00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.703 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.274 00:21:43.274 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.274 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.274 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.534 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.534 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.534 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.534 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.534 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.535 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.535 { 00:21:43.535 "cntlid": 41, 00:21:43.535 "qid": 0, 00:21:43.535 "state": "enabled", 00:21:43.535 "thread": "nvmf_tgt_poll_group_000", 00:21:43.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:43.535 "listen_address": { 00:21:43.535 "trtype": "TCP", 00:21:43.535 "adrfam": "IPv4", 00:21:43.535 "traddr": "10.0.0.2", 00:21:43.535 "trsvcid": "4420" 00:21:43.535 }, 00:21:43.535 "peer_address": { 00:21:43.535 "trtype": "TCP", 00:21:43.535 "adrfam": "IPv4", 00:21:43.535 "traddr": "10.0.0.1", 00:21:43.535 "trsvcid": "53264" 00:21:43.535 }, 00:21:43.535 "auth": { 00:21:43.535 "state": "completed", 00:21:43.535 "digest": "sha256", 00:21:43.535 "dhgroup": "ffdhe8192" 00:21:43.535 } 00:21:43.535 } 00:21:43.535 ]' 00:21:43.535 14:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:43.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.535 14:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.535 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.796 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:43.796 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:44.367 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.367 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:44.367 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.367 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.367 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.367 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.367 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:44.367 14:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.627 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.888 00:21:45.149 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.149 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.149 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.149 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.149 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.149 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.149 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.149 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.149 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.149 { 00:21:45.149 "cntlid": 43, 00:21:45.149 "qid": 0, 00:21:45.149 "state": "enabled", 00:21:45.149 "thread": "nvmf_tgt_poll_group_000", 00:21:45.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:45.149 "listen_address": { 00:21:45.149 "trtype": "TCP", 00:21:45.149 "adrfam": "IPv4", 00:21:45.149 "traddr": "10.0.0.2", 00:21:45.149 "trsvcid": "4420" 00:21:45.149 }, 00:21:45.149 "peer_address": { 00:21:45.149 "trtype": "TCP", 00:21:45.149 "adrfam": "IPv4", 00:21:45.149 "traddr": "10.0.0.1", 00:21:45.149 "trsvcid": "53288" 00:21:45.149 }, 00:21:45.149 "auth": { 00:21:45.149 "state": "completed", 00:21:45.149 "digest": "sha256", 00:21:45.150 "dhgroup": "ffdhe8192" 00:21:45.150 } 00:21:45.150 } 00:21:45.150 ]' 00:21:45.150 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.150 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:21:45.150 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.410 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:45.410 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.410 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.410 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.410 14:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.670 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:45.670 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:46.241 14:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.241 14:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.811 00:21:46.811 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.811 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.811 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.072 { 00:21:47.072 "cntlid": 45, 00:21:47.072 "qid": 0, 00:21:47.072 "state": "enabled", 00:21:47.072 "thread": "nvmf_tgt_poll_group_000", 00:21:47.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:47.072 "listen_address": { 00:21:47.072 "trtype": "TCP", 00:21:47.072 "adrfam": "IPv4", 00:21:47.072 "traddr": "10.0.0.2", 00:21:47.072 "trsvcid": "4420" 00:21:47.072 }, 00:21:47.072 "peer_address": { 00:21:47.072 "trtype": "TCP", 00:21:47.072 "adrfam": "IPv4", 00:21:47.072 "traddr": "10.0.0.1", 00:21:47.072 "trsvcid": "53308" 00:21:47.072 }, 00:21:47.072 "auth": { 00:21:47.072 "state": "completed", 00:21:47.072 "digest": "sha256", 00:21:47.072 "dhgroup": "ffdhe8192" 00:21:47.072 } 00:21:47.072 } 00:21:47.072 ]' 00:21:47.072 
14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.072 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.332 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:47.332 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:47.901 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.901 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:47.901 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.901 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.901 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.901 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.902 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:47.902 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:48.163 14:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.163 14:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:48.734 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.735 { 00:21:48.735 "cntlid": 47, 00:21:48.735 "qid": 0, 00:21:48.735 "state": "enabled", 00:21:48.735 "thread": "nvmf_tgt_poll_group_000", 00:21:48.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:48.735 "listen_address": { 00:21:48.735 "trtype": "TCP", 00:21:48.735 "adrfam": "IPv4", 00:21:48.735 "traddr": "10.0.0.2", 00:21:48.735 "trsvcid": "4420" 00:21:48.735 }, 00:21:48.735 "peer_address": { 00:21:48.735 "trtype": "TCP", 00:21:48.735 "adrfam": "IPv4", 00:21:48.735 "traddr": "10.0.0.1", 00:21:48.735 "trsvcid": "45104" 00:21:48.735 }, 00:21:48.735 "auth": { 00:21:48.735 "state": "completed", 00:21:48.735 
"digest": "sha256", 00:21:48.735 "dhgroup": "ffdhe8192" 00:21:48.735 } 00:21:48.735 } 00:21:48.735 ]' 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.735 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:48.996 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.997 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.997 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.997 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.997 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.997 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.257 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:49.257 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:49.828 14:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.828 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.088 00:21:50.088 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.088 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.088 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.349 { 00:21:50.349 "cntlid": 49, 00:21:50.349 "qid": 0, 00:21:50.349 "state": "enabled", 00:21:50.349 "thread": "nvmf_tgt_poll_group_000", 00:21:50.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:50.349 "listen_address": { 00:21:50.349 "trtype": "TCP", 00:21:50.349 "adrfam": "IPv4", 
00:21:50.349 "traddr": "10.0.0.2", 00:21:50.349 "trsvcid": "4420" 00:21:50.349 }, 00:21:50.349 "peer_address": { 00:21:50.349 "trtype": "TCP", 00:21:50.349 "adrfam": "IPv4", 00:21:50.349 "traddr": "10.0.0.1", 00:21:50.349 "trsvcid": "45110" 00:21:50.349 }, 00:21:50.349 "auth": { 00:21:50.349 "state": "completed", 00:21:50.349 "digest": "sha384", 00:21:50.349 "dhgroup": "null" 00:21:50.349 } 00:21:50.349 } 00:21:50.349 ]' 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:50.349 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.609 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.609 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.609 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.609 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:50.609 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:51.178 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.178 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.178 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.178 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.178 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.178 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.178 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:51.178 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.437 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.697 00:21:51.697 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.697 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.697 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.955 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.955 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.955 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.955 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.955 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.955 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.955 { 00:21:51.955 "cntlid": 51, 00:21:51.955 "qid": 0, 00:21:51.955 "state": "enabled", 
00:21:51.955 "thread": "nvmf_tgt_poll_group_000", 00:21:51.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:51.955 "listen_address": { 00:21:51.956 "trtype": "TCP", 00:21:51.956 "adrfam": "IPv4", 00:21:51.956 "traddr": "10.0.0.2", 00:21:51.956 "trsvcid": "4420" 00:21:51.956 }, 00:21:51.956 "peer_address": { 00:21:51.956 "trtype": "TCP", 00:21:51.956 "adrfam": "IPv4", 00:21:51.956 "traddr": "10.0.0.1", 00:21:51.956 "trsvcid": "45150" 00:21:51.956 }, 00:21:51.956 "auth": { 00:21:51.956 "state": "completed", 00:21:51.956 "digest": "sha384", 00:21:51.956 "dhgroup": "null" 00:21:51.956 } 00:21:51.956 } 00:21:51.956 ]' 00:21:51.956 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.956 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.956 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.956 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:51.956 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.956 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.956 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.956 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.215 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:52.215 14:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:52.785 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.785 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.785 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.785 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.785 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.785 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.785 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:21:52.785 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:53.044 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:53.044 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.044 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:53.044 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:53.044 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:53.044 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.045 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.045 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.045 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.045 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.045 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.045 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.045 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.305 00:21:53.305 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.305 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.305 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.565 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.565 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.565 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.565 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.565 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.565 14:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.565 { 00:21:53.565 "cntlid": 53, 00:21:53.565 "qid": 0, 00:21:53.565 "state": "enabled", 00:21:53.565 "thread": "nvmf_tgt_poll_group_000", 00:21:53.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:53.565 "listen_address": { 00:21:53.565 "trtype": "TCP", 00:21:53.565 "adrfam": "IPv4", 00:21:53.565 "traddr": "10.0.0.2", 00:21:53.565 "trsvcid": "4420" 00:21:53.565 }, 00:21:53.565 "peer_address": { 00:21:53.565 "trtype": "TCP", 00:21:53.565 "adrfam": "IPv4", 00:21:53.565 "traddr": "10.0.0.1", 00:21:53.565 "trsvcid": "45168" 00:21:53.565 }, 00:21:53.565 "auth": { 00:21:53.565 "state": "completed", 00:21:53.565 "digest": "sha384", 00:21:53.565 "dhgroup": "null" 00:21:53.565 } 00:21:53.565 } 00:21:53.565 ]' 00:21:53.565 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.565 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.565 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.566 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:53.566 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.566 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.566 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.566 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.825 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:53.825 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:21:54.396 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.396 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:54.396 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.396 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.396 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.396 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:54.396 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:54.396 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:54.655 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:54.655 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.655 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:54.655 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:54.655 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.655 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.655 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:54.656 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.656 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.656 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.656 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.656 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.656 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.915 00:21:54.915 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.915 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.915 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.915 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.915 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.915 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.915 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.915 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.916 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.916 { 00:21:54.916 "cntlid": 55, 00:21:54.916 "qid": 0, 00:21:54.916 "state": "enabled", 00:21:54.916 "thread": "nvmf_tgt_poll_group_000", 00:21:54.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:54.916 "listen_address": { 00:21:54.916 "trtype": "TCP", 00:21:54.916 "adrfam": "IPv4", 00:21:54.916 "traddr": "10.0.0.2", 00:21:54.916 "trsvcid": "4420" 00:21:54.916 }, 00:21:54.916 "peer_address": { 00:21:54.916 "trtype": "TCP", 00:21:54.916 "adrfam": "IPv4", 00:21:54.916 "traddr": "10.0.0.1", 00:21:54.916 "trsvcid": "45202" 00:21:54.916 }, 00:21:54.916 "auth": { 00:21:54.916 "state": "completed", 00:21:54.916 "digest": "sha384", 00:21:54.916 "dhgroup": "null" 00:21:54.916 } 00:21:54.916 } 00:21:54.916 ]' 00:21:54.916 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.176 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:55.176 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.176 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:55.176 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.176 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.176 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.176 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.437 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:55.437 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.010 14:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:56.010 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.011 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.011 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.011 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.272 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.272 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.272 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.272 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.272 00:21:56.272 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.272 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.272 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.557 { 00:21:56.557 "cntlid": 57, 00:21:56.557 "qid": 0, 00:21:56.557 "state": "enabled", 00:21:56.557 "thread": "nvmf_tgt_poll_group_000", 00:21:56.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:56.557 "listen_address": { 00:21:56.557 "trtype": "TCP", 00:21:56.557 "adrfam": "IPv4", 00:21:56.557 "traddr": "10.0.0.2", 00:21:56.557 "trsvcid": "4420" 00:21:56.557 }, 00:21:56.557 "peer_address": { 00:21:56.557 "trtype": "TCP", 00:21:56.557 "adrfam": "IPv4", 00:21:56.557 "traddr": "10.0.0.1", 00:21:56.557 "trsvcid": "45210" 00:21:56.557 }, 00:21:56.557 "auth": { 00:21:56.557 "state": "completed", 00:21:56.557 "digest": "sha384", 00:21:56.557 "dhgroup": "ffdhe2048" 00:21:56.557 } 00:21:56.557 } 00:21:56.557 ]' 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:56.557 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.838 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.839 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.839 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.839 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:56.839 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:21:57.460 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.460 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.460 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.460 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.460 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.460 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.460 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:57.460 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.721 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.981 00:21:57.981 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.981 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.981 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.242 { 00:21:58.242 "cntlid": 59, 00:21:58.242 "qid": 0, 00:21:58.242 "state": "enabled", 00:21:58.242 "thread": "nvmf_tgt_poll_group_000", 00:21:58.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:58.242 "listen_address": { 00:21:58.242 "trtype": "TCP", 00:21:58.242 "adrfam": "IPv4", 00:21:58.242 "traddr": "10.0.0.2", 00:21:58.242 "trsvcid": "4420" 00:21:58.242 }, 00:21:58.242 "peer_address": { 00:21:58.242 "trtype": "TCP", 00:21:58.242 "adrfam": "IPv4", 00:21:58.242 "traddr": "10.0.0.1", 00:21:58.242 "trsvcid": "45242" 00:21:58.242 }, 00:21:58.242 "auth": { 00:21:58.242 "state": "completed", 00:21:58.242 "digest": "sha384", 00:21:58.242 "dhgroup": "ffdhe2048" 00:21:58.242 } 00:21:58.242 } 00:21:58.242 ]' 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.242 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.501 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:58.501 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:21:59.072 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.072 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:59.072 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.072 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.072 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.072 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.072 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:59.072 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.332 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.592 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.592 { 00:21:59.592 "cntlid": 61, 00:21:59.592 "qid": 0, 00:21:59.592 "state": "enabled", 00:21:59.592 "thread": "nvmf_tgt_poll_group_000", 00:21:59.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:59.592 "listen_address": { 00:21:59.592 "trtype": "TCP", 00:21:59.592 "adrfam": "IPv4", 00:21:59.592 "traddr": "10.0.0.2", 00:21:59.592 "trsvcid": "4420" 00:21:59.592 }, 00:21:59.592 "peer_address": { 00:21:59.592 "trtype": "TCP", 00:21:59.592 "adrfam": "IPv4", 00:21:59.592 "traddr": "10.0.0.1", 00:21:59.592 "trsvcid": "54212" 00:21:59.592 }, 00:21:59.592 "auth": { 00:21:59.592 "state": "completed", 00:21:59.592 "digest": "sha384", 00:21:59.592 "dhgroup": "ffdhe2048" 00:21:59.592 } 00:21:59.592 } 00:21:59.592 ]' 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.592 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.851 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:59.851 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.851 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.851 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.851 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.111 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:00.111 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.681 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.940 00:22:00.940 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.940 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.940 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.200 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.200 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.200 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.200 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.200 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.200 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.200 { 00:22:01.200 "cntlid": 63, 00:22:01.200 "qid": 0, 00:22:01.200 "state": "enabled", 00:22:01.200 "thread": "nvmf_tgt_poll_group_000", 00:22:01.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:01.200 "listen_address": { 00:22:01.200 "trtype": "TCP", 00:22:01.200 "adrfam": "IPv4", 00:22:01.200 "traddr": "10.0.0.2", 00:22:01.200 "trsvcid": "4420" 00:22:01.200 }, 00:22:01.200 "peer_address": { 00:22:01.200 "trtype": "TCP", 00:22:01.200 "adrfam": "IPv4", 00:22:01.200 "traddr": "10.0.0.1", 00:22:01.200 "trsvcid": "54240" 00:22:01.200 }, 00:22:01.200 "auth": { 00:22:01.201 "state": "completed", 00:22:01.201 "digest": "sha384", 00:22:01.201 "dhgroup": "ffdhe2048" 00:22:01.201 } 00:22:01.201 } 00:22:01.201 ]' 00:22:01.201 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.201 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.201 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.201 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:01.460 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.460 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.460 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.460 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.460 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:01.460 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:02.030 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:02.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.030 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.030 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.030 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.030 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.030 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.030 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.030 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.290 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.549 
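Alongside the bdev-level attach, each pass also exercises the kernel initiator: nvme-cli connects with the same key material passed as DHHC-1 secrets on the command line, and everything is torn down again before the next key/dhgroup combination. A sketch of that leg, with the secret values replaced by placeholders (the real base64 strings appear in the trace) and $hostnqn/$hostid standing in for the uuid-based host identity used above:

    # Kernel initiator: authenticate with the host secret and verify the
    # controller with the controller secret, both in the DHHC-1 transport
    # format seen in the trace (placeholders here, not real key material).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
            -q "$hostnqn" --hostid "$hostid" -l 0 \
            --dhchap-secret 'DHHC-1:00:<host key, base64>:' \
            --dhchap-ctrl-secret 'DHHC-1:03:<controller key, base64>:'

    # Teardown; the trace logs
    # "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)" here.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Drop the host entry from the subsystem so the next iteration can re-add
    # it with a different key ($rpc: path to spdk/scripts/rpc.py as above).
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"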
00:22:02.549 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.549 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.549 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.808 { 00:22:02.808 "cntlid": 65, 00:22:02.808 "qid": 0, 00:22:02.808 "state": "enabled", 00:22:02.808 "thread": "nvmf_tgt_poll_group_000", 00:22:02.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:02.808 "listen_address": { 00:22:02.808 "trtype": "TCP", 00:22:02.808 "adrfam": "IPv4", 00:22:02.808 "traddr": "10.0.0.2", 00:22:02.808 "trsvcid": "4420" 00:22:02.808 }, 00:22:02.808 "peer_address": { 00:22:02.808 "trtype": "TCP", 00:22:02.808 "adrfam": "IPv4", 00:22:02.808 "traddr": "10.0.0.1", 00:22:02.808 "trsvcid": "54262" 00:22:02.808 }, 00:22:02.808 "auth": { 00:22:02.808 "state": "completed", 00:22:02.808 "digest": "sha384", 00:22:02.808 "dhgroup": "ffdhe3072" 00:22:02.808 } 00:22:02.808 } 00:22:02.808 ]' 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.808 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.075 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:03.075 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:03.644 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.644 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:03.644 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.644 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.644 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.644 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.644 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:03.644 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.904 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.164 00:22:04.164 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.164 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.164 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.424 { 00:22:04.424 "cntlid": 67, 00:22:04.424 "qid": 0, 00:22:04.424 "state": "enabled", 00:22:04.424 "thread": "nvmf_tgt_poll_group_000", 00:22:04.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:04.424 "listen_address": { 00:22:04.424 "trtype": "TCP", 00:22:04.424 "adrfam": "IPv4", 00:22:04.424 "traddr": "10.0.0.2", 00:22:04.424 "trsvcid": "4420" 00:22:04.424 }, 00:22:04.424 "peer_address": { 00:22:04.424 "trtype": "TCP", 00:22:04.424 "adrfam": "IPv4", 00:22:04.424 "traddr": "10.0.0.1", 00:22:04.424 "trsvcid": "54292" 00:22:04.424 }, 00:22:04.424 "auth": { 00:22:04.424 "state": "completed", 00:22:04.424 "digest": "sha384", 00:22:04.424 "dhgroup": "ffdhe3072" 00:22:04.424 } 00:22:04.424 } 00:22:04.424 ]' 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:04.424 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.424 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.424 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.424 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.684 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret 
DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:04.684 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:05.253 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.253 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.253 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.253 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.253 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.253 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.253 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:05.253 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.514 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.515 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.775 00:22:05.775 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.775 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.775 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.035 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.035 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.035 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.035 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.035 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.035 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.035 { 00:22:06.035 "cntlid": 69, 00:22:06.035 "qid": 0, 00:22:06.035 "state": "enabled", 00:22:06.035 "thread": "nvmf_tgt_poll_group_000", 00:22:06.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:06.035 "listen_address": { 00:22:06.035 "trtype": "TCP", 00:22:06.035 "adrfam": "IPv4", 00:22:06.035 "traddr": "10.0.0.2", 00:22:06.035 "trsvcid": "4420" 00:22:06.035 }, 00:22:06.035 "peer_address": { 00:22:06.035 "trtype": "TCP", 00:22:06.035 "adrfam": "IPv4", 00:22:06.035 "traddr": "10.0.0.1", 00:22:06.035 "trsvcid": "54312" 00:22:06.035 }, 00:22:06.035 "auth": { 00:22:06.035 "state": "completed", 00:22:06.035 "digest": "sha384", 00:22:06.035 "dhgroup": "ffdhe3072" 00:22:06.035 } 00:22:06.036 } 00:22:06.036 ]' 00:22:06.036 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.036 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:06.036 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.036 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:06.036 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.036 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.036 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.036 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:06.297 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:06.297 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:06.881 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.881 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:06.881 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.881 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.881 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.881 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.881 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:06.881 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
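The iteration logged above reduces to a short shell sequence. As a minimal sketch (not a verbatim copy of target/auth.sh): the hostrpc and rpc wrappers below are assumptions based on the rpc.py invocations visible in the log, $hostnqn stands for the uuid-based host NQN used throughout, and key3 refers to a keyring key set up earlier in the run. The verify-and-teardown half of the loop is sketched a little further below.

  # host-side SPDK app is driven over /var/tmp/host.sock; the target side is assumed to use the default socket
  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  rpc()     { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # limit the host bdev layer to a single digest/dhgroup combination for this pass
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # register the host NQN on the subsystem with the key under test (key3 has no controller
  # key in this run, so --dhchap-ctrlr-key is omitted)
  rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3

  # attach a controller through the host bdev layer; the qpair must authenticate with key3
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
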
00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.141 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.401 00:22:07.401 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.401 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.401 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.662 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.662 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.662 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.662 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.662 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.662 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.662 { 00:22:07.662 "cntlid": 71, 00:22:07.662 "qid": 0, 00:22:07.662 "state": "enabled", 00:22:07.662 "thread": "nvmf_tgt_poll_group_000", 00:22:07.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:07.662 "listen_address": { 00:22:07.662 "trtype": "TCP", 00:22:07.662 "adrfam": "IPv4", 00:22:07.662 "traddr": "10.0.0.2", 00:22:07.662 "trsvcid": "4420" 00:22:07.662 }, 00:22:07.662 "peer_address": { 00:22:07.662 "trtype": "TCP", 00:22:07.662 "adrfam": "IPv4", 00:22:07.662 "traddr": "10.0.0.1", 00:22:07.662 "trsvcid": "54344" 00:22:07.662 }, 00:22:07.663 "auth": { 00:22:07.663 "state": "completed", 00:22:07.663 "digest": "sha384", 00:22:07.663 "dhgroup": "ffdhe3072" 00:22:07.663 } 00:22:07.663 } 00:22:07.663 ]' 00:22:07.663 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.663 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.663 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.663 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:07.663 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.663 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.663 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.663 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.923 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:07.923 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:08.495 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.495 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:08.495 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.495 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.495 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.495 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.495 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.495 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:08.495 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
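The checks interleaved with the qpair JSON above repeat on every pass. A condensed sketch of the verify-and-teardown half, using the same hostrpc/rpc wrappers assumed in the earlier sketch and the sha384/ffdhe4096/key0 values of the pass that starts here ($key0 and $ckey0 are placeholders for the DHHC-1 secrets shown in the log):

  # confirm the attached controller and the negotiated auth parameters on the target side
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  hostrpc bdev_nvme_detach_controller nvme0

  # the same key pair is then exercised end-to-end with nvme-cli before the host entry
  # is removed for the next key/dhgroup iteration
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
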
00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.756 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.017 00:22:09.017 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.017 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.017 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.017 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.017 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.017 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.017 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.017 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.277 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.277 { 00:22:09.277 "cntlid": 73, 00:22:09.277 "qid": 0, 00:22:09.277 "state": "enabled", 00:22:09.277 "thread": "nvmf_tgt_poll_group_000", 00:22:09.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:09.277 "listen_address": { 00:22:09.277 "trtype": "TCP", 00:22:09.277 "adrfam": "IPv4", 00:22:09.277 "traddr": "10.0.0.2", 00:22:09.277 "trsvcid": "4420" 00:22:09.277 }, 00:22:09.277 "peer_address": { 00:22:09.277 "trtype": "TCP", 00:22:09.277 "adrfam": "IPv4", 00:22:09.277 "traddr": "10.0.0.1", 00:22:09.277 "trsvcid": "48232" 00:22:09.277 }, 00:22:09.277 "auth": { 00:22:09.277 "state": "completed", 00:22:09.277 "digest": "sha384", 00:22:09.277 "dhgroup": "ffdhe4096" 00:22:09.277 } 00:22:09.277 } 00:22:09.277 ]' 00:22:09.277 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.277 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.277 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.277 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.277 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.277 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.277 
14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.277 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.538 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:09.538 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:10.109 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.109 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.109 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.109 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.109 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.109 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.109 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:10.109 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.370 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.631 00:22:10.631 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.631 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.631 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.631 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.631 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.631 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.631 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.631 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.631 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.631 { 00:22:10.631 "cntlid": 75, 00:22:10.631 "qid": 0, 00:22:10.631 "state": "enabled", 00:22:10.631 "thread": "nvmf_tgt_poll_group_000", 00:22:10.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:10.631 "listen_address": { 00:22:10.631 "trtype": "TCP", 00:22:10.631 "adrfam": "IPv4", 00:22:10.631 "traddr": "10.0.0.2", 00:22:10.631 "trsvcid": "4420" 00:22:10.631 }, 00:22:10.631 "peer_address": { 00:22:10.631 "trtype": "TCP", 00:22:10.631 "adrfam": "IPv4", 00:22:10.631 "traddr": "10.0.0.1", 00:22:10.631 "trsvcid": "48260" 00:22:10.631 }, 00:22:10.631 "auth": { 00:22:10.631 "state": "completed", 00:22:10.631 "digest": "sha384", 00:22:10.631 "dhgroup": "ffdhe4096" 00:22:10.631 } 00:22:10.631 } 00:22:10.631 ]' 00:22:10.631 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.892 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:10.892 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.892 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:22:10.892 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.892 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.892 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.892 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.892 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:10.892 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.836 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.837 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.837 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.837 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.097 00:22:12.097 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.097 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.097 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.358 { 00:22:12.358 "cntlid": 77, 00:22:12.358 "qid": 0, 00:22:12.358 "state": "enabled", 00:22:12.358 "thread": "nvmf_tgt_poll_group_000", 00:22:12.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:12.358 "listen_address": { 00:22:12.358 "trtype": "TCP", 00:22:12.358 "adrfam": "IPv4", 00:22:12.358 "traddr": "10.0.0.2", 00:22:12.358 "trsvcid": "4420" 00:22:12.358 }, 00:22:12.358 "peer_address": { 00:22:12.358 "trtype": "TCP", 00:22:12.358 "adrfam": "IPv4", 00:22:12.358 "traddr": "10.0.0.1", 00:22:12.358 "trsvcid": "48298" 00:22:12.358 }, 00:22:12.358 "auth": { 00:22:12.358 "state": "completed", 00:22:12.358 "digest": "sha384", 00:22:12.358 "dhgroup": "ffdhe4096" 00:22:12.358 } 00:22:12.358 } 00:22:12.358 ]' 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:12.358 14:16:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.358 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.620 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:12.620 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:13.193 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.193 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:13.193 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.193 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.193 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.193 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.193 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:13.193 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:13.454 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.455 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.715 00:22:13.715 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.715 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.715 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.974 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.974 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.974 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.974 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.974 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.974 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.974 { 00:22:13.974 "cntlid": 79, 00:22:13.975 "qid": 0, 00:22:13.975 "state": "enabled", 00:22:13.975 "thread": "nvmf_tgt_poll_group_000", 00:22:13.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:13.975 "listen_address": { 00:22:13.975 "trtype": "TCP", 00:22:13.975 "adrfam": "IPv4", 00:22:13.975 "traddr": "10.0.0.2", 00:22:13.975 "trsvcid": "4420" 00:22:13.975 }, 00:22:13.975 "peer_address": { 00:22:13.975 "trtype": "TCP", 00:22:13.975 "adrfam": "IPv4", 00:22:13.975 "traddr": "10.0.0.1", 00:22:13.975 "trsvcid": "48334" 00:22:13.975 }, 00:22:13.975 "auth": { 00:22:13.975 "state": "completed", 00:22:13.975 "digest": "sha384", 00:22:13.975 "dhgroup": "ffdhe4096" 00:22:13.975 } 00:22:13.975 } 00:22:13.975 ]' 00:22:13.975 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.975 14:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.975 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.975 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:13.975 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.975 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.975 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.975 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.234 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:14.234 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:14.803 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.803 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.803 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.803 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.804 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.804 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.804 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.804 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.804 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:15.064 14:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.064 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.324 00:22:15.324 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.324 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.324 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.584 { 00:22:15.584 "cntlid": 81, 00:22:15.584 "qid": 0, 00:22:15.584 "state": "enabled", 00:22:15.584 "thread": "nvmf_tgt_poll_group_000", 00:22:15.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:15.584 "listen_address": { 00:22:15.584 "trtype": "TCP", 00:22:15.584 "adrfam": "IPv4", 00:22:15.584 "traddr": "10.0.0.2", 00:22:15.584 "trsvcid": "4420" 00:22:15.584 }, 00:22:15.584 "peer_address": { 00:22:15.584 "trtype": "TCP", 00:22:15.584 "adrfam": "IPv4", 00:22:15.584 "traddr": "10.0.0.1", 00:22:15.584 "trsvcid": "48348" 00:22:15.584 }, 00:22:15.584 "auth": { 00:22:15.584 "state": "completed", 00:22:15.584 "digest": 
"sha384", 00:22:15.584 "dhgroup": "ffdhe6144" 00:22:15.584 } 00:22:15.584 } 00:22:15.584 ]' 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.584 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.843 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:15.843 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:16.414 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.414 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.414 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.414 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.414 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.414 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.414 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:16.414 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.674 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.935 00:22:16.935 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.935 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.935 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.195 { 00:22:17.195 "cntlid": 83, 00:22:17.195 "qid": 0, 00:22:17.195 "state": "enabled", 00:22:17.195 "thread": "nvmf_tgt_poll_group_000", 00:22:17.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:17.195 "listen_address": { 00:22:17.195 "trtype": "TCP", 00:22:17.195 "adrfam": "IPv4", 00:22:17.195 "traddr": "10.0.0.2", 00:22:17.195 
"trsvcid": "4420" 00:22:17.195 }, 00:22:17.195 "peer_address": { 00:22:17.195 "trtype": "TCP", 00:22:17.195 "adrfam": "IPv4", 00:22:17.195 "traddr": "10.0.0.1", 00:22:17.195 "trsvcid": "48388" 00:22:17.195 }, 00:22:17.195 "auth": { 00:22:17.195 "state": "completed", 00:22:17.195 "digest": "sha384", 00:22:17.195 "dhgroup": "ffdhe6144" 00:22:17.195 } 00:22:17.195 } 00:22:17.195 ]' 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:17.195 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.458 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.458 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.458 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.458 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:17.458 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:18.028 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.028 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.028 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.028 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.028 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.028 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.028 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:18.028 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:18.288 
14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:18.288 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.288 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:18.288 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:18.288 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:18.288 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.288 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.288 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.288 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.289 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.289 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.289 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.289 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.549 00:22:18.549 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.549 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.549 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.809 { 00:22:18.809 "cntlid": 85, 00:22:18.809 "qid": 0, 00:22:18.809 "state": "enabled", 00:22:18.809 "thread": "nvmf_tgt_poll_group_000", 00:22:18.809 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:18.809 "listen_address": { 00:22:18.809 "trtype": "TCP", 00:22:18.809 "adrfam": "IPv4", 00:22:18.809 "traddr": "10.0.0.2", 00:22:18.809 "trsvcid": "4420" 00:22:18.809 }, 00:22:18.809 "peer_address": { 00:22:18.809 "trtype": "TCP", 00:22:18.809 "adrfam": "IPv4", 00:22:18.809 "traddr": "10.0.0.1", 00:22:18.809 "trsvcid": "35304" 00:22:18.809 }, 00:22:18.809 "auth": { 00:22:18.809 "state": "completed", 00:22:18.809 "digest": "sha384", 00:22:18.809 "dhgroup": "ffdhe6144" 00:22:18.809 } 00:22:18.809 } 00:22:18.809 ]' 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:18.809 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.070 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.070 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.070 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.070 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:19.070 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:19.640 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.640 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:19.640 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.640 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:19.900 14:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.900 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:20.472 00:22:20.472 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.472 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.472 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.472 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.472 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.472 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.472 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.472 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.472 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.472 { 00:22:20.472 "cntlid": 87, 
00:22:20.472 "qid": 0, 00:22:20.472 "state": "enabled", 00:22:20.472 "thread": "nvmf_tgt_poll_group_000", 00:22:20.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:20.472 "listen_address": { 00:22:20.472 "trtype": "TCP", 00:22:20.472 "adrfam": "IPv4", 00:22:20.472 "traddr": "10.0.0.2", 00:22:20.472 "trsvcid": "4420" 00:22:20.472 }, 00:22:20.472 "peer_address": { 00:22:20.472 "trtype": "TCP", 00:22:20.472 "adrfam": "IPv4", 00:22:20.472 "traddr": "10.0.0.1", 00:22:20.472 "trsvcid": "35322" 00:22:20.472 }, 00:22:20.472 "auth": { 00:22:20.472 "state": "completed", 00:22:20.472 "digest": "sha384", 00:22:20.472 "dhgroup": "ffdhe6144" 00:22:20.472 } 00:22:20.472 } 00:22:20.472 ]' 00:22:20.472 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.472 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:20.472 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.472 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:20.472 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.732 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.732 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.732 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.732 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:20.732 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:21.334 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.334 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:21.334 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.334 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.334 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.334 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.334 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.334 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:21.334 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.595 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.167 00:22:22.167 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.167 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.167 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.167 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.167 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.167 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.167 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.427 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.427 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.427 { 00:22:22.427 "cntlid": 89, 00:22:22.427 "qid": 0, 00:22:22.427 "state": "enabled", 00:22:22.427 "thread": "nvmf_tgt_poll_group_000", 00:22:22.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:22.427 "listen_address": { 00:22:22.427 "trtype": "TCP", 00:22:22.427 "adrfam": "IPv4", 00:22:22.427 "traddr": "10.0.0.2", 00:22:22.427 "trsvcid": "4420" 00:22:22.427 }, 00:22:22.427 "peer_address": { 00:22:22.427 "trtype": "TCP", 00:22:22.427 "adrfam": "IPv4", 00:22:22.427 "traddr": "10.0.0.1", 00:22:22.427 "trsvcid": "35346" 00:22:22.427 }, 00:22:22.427 "auth": { 00:22:22.427 "state": "completed", 00:22:22.427 "digest": "sha384", 00:22:22.427 "dhgroup": "ffdhe8192" 00:22:22.427 } 00:22:22.427 } 00:22:22.427 ]' 00:22:22.427 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.427 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:22.427 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.427 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.427 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.427 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.427 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.427 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.687 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:22.687 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:23.257 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.257 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:23.257 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.257 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.257 14:16:11 
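The same key material is then exercised through the kernel initiator with nvme-cli, passing the secrets directly on the command line, before the host entry is removed so the next combination starts clean. A sketch of that half of the cycle, with the secret strings replaced by placeholders (the DHHC-1:NN: prefix is the NVMe secret representation; NN encodes whether and how the raw secret was hashed):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:00:<base64 host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64 controller key>:'

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Drop the host from the subsystem before the next digest/dhgroup/key pass
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"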
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.257 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.257 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:23.257 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.519 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.779 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.041 { 00:22:24.041 "cntlid": 91, 00:22:24.041 "qid": 0, 00:22:24.041 "state": "enabled", 00:22:24.041 "thread": "nvmf_tgt_poll_group_000", 00:22:24.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:24.041 "listen_address": { 00:22:24.041 "trtype": "TCP", 00:22:24.041 "adrfam": "IPv4", 00:22:24.041 "traddr": "10.0.0.2", 00:22:24.041 "trsvcid": "4420" 00:22:24.041 }, 00:22:24.041 "peer_address": { 00:22:24.041 "trtype": "TCP", 00:22:24.041 "adrfam": "IPv4", 00:22:24.041 "traddr": "10.0.0.1", 00:22:24.041 "trsvcid": "35384" 00:22:24.041 }, 00:22:24.041 "auth": { 00:22:24.041 "state": "completed", 00:22:24.041 "digest": "sha384", 00:22:24.041 "dhgroup": "ffdhe8192" 00:22:24.041 } 00:22:24.041 } 00:22:24.041 ]' 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.041 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.302 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.302 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.302 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.302 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.302 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.564 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:24.564 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:25.136 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.137 14:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.137 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.709 00:22:25.709 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.709 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.709 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.970 14:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.970 { 00:22:25.970 "cntlid": 93, 00:22:25.970 "qid": 0, 00:22:25.970 "state": "enabled", 00:22:25.970 "thread": "nvmf_tgt_poll_group_000", 00:22:25.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:25.970 "listen_address": { 00:22:25.970 "trtype": "TCP", 00:22:25.970 "adrfam": "IPv4", 00:22:25.970 "traddr": "10.0.0.2", 00:22:25.970 "trsvcid": "4420" 00:22:25.970 }, 00:22:25.970 "peer_address": { 00:22:25.970 "trtype": "TCP", 00:22:25.970 "adrfam": "IPv4", 00:22:25.970 "traddr": "10.0.0.1", 00:22:25.970 "trsvcid": "35420" 00:22:25.970 }, 00:22:25.970 "auth": { 00:22:25.970 "state": "completed", 00:22:25.970 "digest": "sha384", 00:22:25.970 "dhgroup": "ffdhe8192" 00:22:25.970 } 00:22:25.970 } 00:22:25.970 ]' 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.970 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.231 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:26.231 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:26.801 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.801 14:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.801 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.801 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.801 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.801 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.801 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:26.801 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:27.061 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:27.061 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.061 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:27.061 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:27.061 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:27.062 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.062 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:27.062 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.062 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.062 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.062 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:27.062 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.062 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.631 00:22:27.631 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.631 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.631 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
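Key index 3 has no controller key, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion visible above drops the flag entirely: the target still authenticates the host, but the host does not request authentication of the controller in return. The resulting pair of calls reduces to the following (same wrapper assumptions as before):

    # No --dhchap-ctrlr-key: unidirectional authentication only
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3

    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3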
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.631 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.631 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.631 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.631 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.631 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.631 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.631 { 00:22:27.631 "cntlid": 95, 00:22:27.631 "qid": 0, 00:22:27.631 "state": "enabled", 00:22:27.631 "thread": "nvmf_tgt_poll_group_000", 00:22:27.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:27.631 "listen_address": { 00:22:27.631 "trtype": "TCP", 00:22:27.631 "adrfam": "IPv4", 00:22:27.631 "traddr": "10.0.0.2", 00:22:27.631 "trsvcid": "4420" 00:22:27.631 }, 00:22:27.631 "peer_address": { 00:22:27.631 "trtype": "TCP", 00:22:27.631 "adrfam": "IPv4", 00:22:27.631 "traddr": "10.0.0.1", 00:22:27.631 "trsvcid": "35456" 00:22:27.631 }, 00:22:27.631 "auth": { 00:22:27.632 "state": "completed", 00:22:27.632 "digest": "sha384", 00:22:27.632 "dhgroup": "ffdhe8192" 00:22:27.632 } 00:22:27.632 } 00:22:27.632 ]' 00:22:27.632 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.632 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:27.632 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.893 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:27.893 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.893 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.893 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.893 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.153 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:28.153 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.724 14:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.724 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.985 00:22:28.985 
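The @118/@119/@120 markers above are the outer loops advancing: the digest has moved on to sha512 and the dhgroup list restarts at null, so every digest, dhgroup and key index gets its own pass. Reconstructed from those xtrace markers (not quoted from auth.sh), the control flow is roughly:

    for digest in "${digests[@]}"; do          # this run: ... sha384, sha512
      for dhgroup in "${dhgroups[@]}"; do      # null, ..., ffdhe6144, ffdhe8192
        for keyid in "${!keys[@]}"; do         # key0 .. key3
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done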
14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.985 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.985 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.246 { 00:22:29.246 "cntlid": 97, 00:22:29.246 "qid": 0, 00:22:29.246 "state": "enabled", 00:22:29.246 "thread": "nvmf_tgt_poll_group_000", 00:22:29.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:29.246 "listen_address": { 00:22:29.246 "trtype": "TCP", 00:22:29.246 "adrfam": "IPv4", 00:22:29.246 "traddr": "10.0.0.2", 00:22:29.246 "trsvcid": "4420" 00:22:29.246 }, 00:22:29.246 "peer_address": { 00:22:29.246 "trtype": "TCP", 00:22:29.246 "adrfam": "IPv4", 00:22:29.246 "traddr": "10.0.0.1", 00:22:29.246 "trsvcid": "45144" 00:22:29.246 }, 00:22:29.246 "auth": { 00:22:29.246 "state": "completed", 00:22:29.246 "digest": "sha512", 00:22:29.246 "dhgroup": "null" 00:22:29.246 } 00:22:29.246 } 00:22:29.246 ]' 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:29.246 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.506 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.506 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.506 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.506 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:29.506 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:30.075 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.334 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.594 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.594 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.594 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.594 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.594 00:22:30.594 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.594 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.594 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.853 { 00:22:30.853 "cntlid": 99, 00:22:30.853 "qid": 0, 00:22:30.853 "state": "enabled", 00:22:30.853 "thread": "nvmf_tgt_poll_group_000", 00:22:30.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:30.853 "listen_address": { 00:22:30.853 "trtype": "TCP", 00:22:30.853 "adrfam": "IPv4", 00:22:30.853 "traddr": "10.0.0.2", 00:22:30.853 "trsvcid": "4420" 00:22:30.853 }, 00:22:30.853 "peer_address": { 00:22:30.853 "trtype": "TCP", 00:22:30.853 "adrfam": "IPv4", 00:22:30.853 "traddr": "10.0.0.1", 00:22:30.853 "trsvcid": "45170" 00:22:30.853 }, 00:22:30.853 "auth": { 00:22:30.853 "state": "completed", 00:22:30.853 "digest": "sha512", 00:22:30.853 "dhgroup": "null" 00:22:30.853 } 00:22:30.853 } 00:22:30.853 ]' 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:30.853 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.113 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.113 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.113 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.113 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:31.113 14:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:31.681 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.681 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:31.681 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.681 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.681 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.681 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.681 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:31.681 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:31.940 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.200 00:22:32.200 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.200 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.200 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.460 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.460 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.460 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.460 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.460 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.460 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.460 { 00:22:32.460 "cntlid": 101, 00:22:32.460 "qid": 0, 00:22:32.460 "state": "enabled", 00:22:32.460 "thread": "nvmf_tgt_poll_group_000", 00:22:32.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:32.460 "listen_address": { 00:22:32.460 "trtype": "TCP", 00:22:32.460 "adrfam": "IPv4", 00:22:32.460 "traddr": "10.0.0.2", 00:22:32.460 "trsvcid": "4420" 00:22:32.460 }, 00:22:32.460 "peer_address": { 00:22:32.460 "trtype": "TCP", 00:22:32.460 "adrfam": "IPv4", 00:22:32.460 "traddr": "10.0.0.1", 00:22:32.460 "trsvcid": "45190" 00:22:32.460 }, 00:22:32.460 "auth": { 00:22:32.460 "state": "completed", 00:22:32.460 "digest": "sha512", 00:22:32.460 "dhgroup": "null" 00:22:32.460 } 00:22:32.460 } 00:22:32.460 ]' 00:22:32.460 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.460 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.460 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.460 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:32.460 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.460 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.460 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.460 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.720 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:32.720 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:33.290 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.290 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.290 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.290 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.290 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.290 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.290 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:33.290 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:33.550 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.550 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.810 00:22:33.810 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.810 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.810 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.810 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.810 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.810 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.810 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.810 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.071 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.071 { 00:22:34.071 "cntlid": 103, 00:22:34.071 "qid": 0, 00:22:34.071 "state": "enabled", 00:22:34.071 "thread": "nvmf_tgt_poll_group_000", 00:22:34.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:34.071 "listen_address": { 00:22:34.071 "trtype": "TCP", 00:22:34.071 "adrfam": "IPv4", 00:22:34.071 "traddr": "10.0.0.2", 00:22:34.071 "trsvcid": "4420" 00:22:34.071 }, 00:22:34.071 "peer_address": { 00:22:34.071 "trtype": "TCP", 00:22:34.071 "adrfam": "IPv4", 00:22:34.071 "traddr": "10.0.0.1", 00:22:34.071 "trsvcid": "45200" 00:22:34.071 }, 00:22:34.071 "auth": { 00:22:34.071 "state": "completed", 00:22:34.071 "digest": "sha512", 00:22:34.071 "dhgroup": "null" 00:22:34.071 } 00:22:34.071 } 00:22:34.071 ]' 00:22:34.071 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.071 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.071 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.071 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:34.071 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.071 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.071 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.071 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.331 14:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:34.331 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.928 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.219 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.219 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
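From here the pattern repeats with the ffdhe2048 DH group: the host is reconfigured with bdev_nvme_set_options and connect_authenticate is re-run for every key index. This is the nested loop visible at target/auth.sh@119-@123 in the trace; a paraphrase (not the literal script) is:

  # hostrpc and connect_authenticate are the script's own helpers (see @31 and
  # @65-@78 in the trace); the keys/ckeys arrays are populated earlier in the
  # run and are not shown in this excerpt. Only the sha512 portion of the sweep
  # appears here, so the digest is written literally below.
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done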
00:22:35.219 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.219 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.219 00:22:35.219 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.219 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.219 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.535 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.535 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.535 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.535 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.535 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.535 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.535 { 00:22:35.535 "cntlid": 105, 00:22:35.535 "qid": 0, 00:22:35.535 "state": "enabled", 00:22:35.535 "thread": "nvmf_tgt_poll_group_000", 00:22:35.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:35.535 "listen_address": { 00:22:35.535 "trtype": "TCP", 00:22:35.535 "adrfam": "IPv4", 00:22:35.535 "traddr": "10.0.0.2", 00:22:35.535 "trsvcid": "4420" 00:22:35.535 }, 00:22:35.535 "peer_address": { 00:22:35.535 "trtype": "TCP", 00:22:35.535 "adrfam": "IPv4", 00:22:35.535 "traddr": "10.0.0.1", 00:22:35.535 "trsvcid": "45228" 00:22:35.535 }, 00:22:35.535 "auth": { 00:22:35.535 "state": "completed", 00:22:35.535 "digest": "sha512", 00:22:35.535 "dhgroup": "ffdhe2048" 00:22:35.535 } 00:22:35.535 } 00:22:35.535 ]' 00:22:35.535 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.535 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.535 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.535 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:35.535 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.535 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.535 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.535 14:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.798 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:35.798 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:36.368 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.368 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:36.368 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.368 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.368 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.368 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.368 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:36.368 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.636 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.895 00:22:36.895 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.895 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.895 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.156 { 00:22:37.156 "cntlid": 107, 00:22:37.156 "qid": 0, 00:22:37.156 "state": "enabled", 00:22:37.156 "thread": "nvmf_tgt_poll_group_000", 00:22:37.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:37.156 "listen_address": { 00:22:37.156 "trtype": "TCP", 00:22:37.156 "adrfam": "IPv4", 00:22:37.156 "traddr": "10.0.0.2", 00:22:37.156 "trsvcid": "4420" 00:22:37.156 }, 00:22:37.156 "peer_address": { 00:22:37.156 "trtype": "TCP", 00:22:37.156 "adrfam": "IPv4", 00:22:37.156 "traddr": "10.0.0.1", 00:22:37.156 "trsvcid": "45256" 00:22:37.156 }, 00:22:37.156 "auth": { 00:22:37.156 "state": "completed", 00:22:37.156 "digest": "sha512", 00:22:37.156 "dhgroup": "ffdhe2048" 00:22:37.156 } 00:22:37.156 } 00:22:37.156 ]' 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.156 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.416 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:37.416 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:37.987 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.987 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:37.987 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.987 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.987 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.987 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.987 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:37.987 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
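Each pass also exercises the in-kernel initiator: nvme-cli connects to the same subsystem with the DHHC-1 secrets passed on the command line, disconnects, and the host is then removed from the subsystem's allow list. These are the commands behind the @36, @82 and @83 entries in this trace; the secret strings are the throwaway test keys printed in the log and are abbreviated here.

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret 'DHHC-1:01:<host key, see log>' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller key, see log>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # rpc_cmd is the target-side RPC wrapper used throughout the trace.
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be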
00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.248 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.248 00:22:38.509 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.509 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.509 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.509 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.509 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.509 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.509 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.509 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.509 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.509 { 00:22:38.509 "cntlid": 109, 00:22:38.509 "qid": 0, 00:22:38.509 "state": "enabled", 00:22:38.509 "thread": "nvmf_tgt_poll_group_000", 00:22:38.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:38.509 "listen_address": { 00:22:38.509 "trtype": "TCP", 00:22:38.509 "adrfam": "IPv4", 00:22:38.509 "traddr": "10.0.0.2", 00:22:38.509 "trsvcid": "4420" 00:22:38.509 }, 00:22:38.509 "peer_address": { 00:22:38.509 "trtype": "TCP", 00:22:38.509 "adrfam": "IPv4", 00:22:38.509 "traddr": "10.0.0.1", 00:22:38.509 "trsvcid": "51322" 00:22:38.509 }, 00:22:38.509 "auth": { 00:22:38.509 "state": "completed", 00:22:38.509 "digest": "sha512", 00:22:38.509 "dhgroup": "ffdhe2048" 00:22:38.509 } 00:22:38.509 } 00:22:38.509 ]' 00:22:38.509 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.769 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.769 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.769 14:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:38.769 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.769 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.769 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.769 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.030 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:39.030 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:39.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.603 14:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.603 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.863 00:22:39.863 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.863 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.863 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.124 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.124 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.124 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.124 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.124 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.124 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.124 { 00:22:40.124 "cntlid": 111, 00:22:40.124 "qid": 0, 00:22:40.124 "state": "enabled", 00:22:40.124 "thread": "nvmf_tgt_poll_group_000", 00:22:40.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:40.124 "listen_address": { 00:22:40.124 "trtype": "TCP", 00:22:40.124 "adrfam": "IPv4", 00:22:40.124 "traddr": "10.0.0.2", 00:22:40.124 "trsvcid": "4420" 00:22:40.124 }, 00:22:40.124 "peer_address": { 00:22:40.124 "trtype": "TCP", 00:22:40.124 "adrfam": "IPv4", 00:22:40.124 "traddr": "10.0.0.1", 00:22:40.124 "trsvcid": "51356" 00:22:40.124 }, 00:22:40.124 "auth": { 00:22:40.124 "state": "completed", 00:22:40.124 "digest": "sha512", 00:22:40.124 "dhgroup": "ffdhe2048" 00:22:40.124 } 00:22:40.124 } 00:22:40.124 ]' 00:22:40.124 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.124 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.124 
14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.124 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:40.124 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.385 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.385 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.385 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.385 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:40.385 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:40.957 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.957 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:40.957 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.957 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.218 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.480 00:22:41.480 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.480 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.480 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.741 { 00:22:41.741 "cntlid": 113, 00:22:41.741 "qid": 0, 00:22:41.741 "state": "enabled", 00:22:41.741 "thread": "nvmf_tgt_poll_group_000", 00:22:41.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:41.741 "listen_address": { 00:22:41.741 "trtype": "TCP", 00:22:41.741 "adrfam": "IPv4", 00:22:41.741 "traddr": "10.0.0.2", 00:22:41.741 "trsvcid": "4420" 00:22:41.741 }, 00:22:41.741 "peer_address": { 00:22:41.741 "trtype": "TCP", 00:22:41.741 "adrfam": "IPv4", 00:22:41.741 "traddr": "10.0.0.1", 00:22:41.741 "trsvcid": "51384" 00:22:41.741 }, 00:22:41.741 "auth": { 00:22:41.741 "state": "completed", 00:22:41.741 "digest": "sha512", 00:22:41.741 "dhgroup": "ffdhe3072" 00:22:41.741 } 00:22:41.741 } 00:22:41.741 ]' 00:22:41.741 14:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.741 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.001 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:42.001 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:42.572 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.572 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.572 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.572 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.572 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.572 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.572 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.572 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.833 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.094 00:22:43.094 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.094 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.094 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.355 { 00:22:43.355 "cntlid": 115, 00:22:43.355 "qid": 0, 00:22:43.355 "state": "enabled", 00:22:43.355 "thread": "nvmf_tgt_poll_group_000", 00:22:43.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:43.355 "listen_address": { 00:22:43.355 "trtype": "TCP", 00:22:43.355 "adrfam": "IPv4", 00:22:43.355 "traddr": "10.0.0.2", 00:22:43.355 "trsvcid": "4420" 00:22:43.355 }, 00:22:43.355 "peer_address": { 00:22:43.355 "trtype": "TCP", 00:22:43.355 "adrfam": "IPv4", 
00:22:43.355 "traddr": "10.0.0.1", 00:22:43.355 "trsvcid": "51396" 00:22:43.355 }, 00:22:43.355 "auth": { 00:22:43.355 "state": "completed", 00:22:43.355 "digest": "sha512", 00:22:43.355 "dhgroup": "ffdhe3072" 00:22:43.355 } 00:22:43.355 } 00:22:43.355 ]' 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.355 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.616 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:43.616 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:44.187 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.187 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:44.187 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.187 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.187 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.188 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.188 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:44.188 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.449 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.708 00:22:44.708 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.708 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.708 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.968 { 00:22:44.968 "cntlid": 117, 00:22:44.968 "qid": 0, 00:22:44.968 "state": "enabled", 00:22:44.968 "thread": "nvmf_tgt_poll_group_000", 00:22:44.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:44.968 "listen_address": { 00:22:44.968 "trtype": "TCP", 
00:22:44.968 "adrfam": "IPv4", 00:22:44.968 "traddr": "10.0.0.2", 00:22:44.968 "trsvcid": "4420" 00:22:44.968 }, 00:22:44.968 "peer_address": { 00:22:44.968 "trtype": "TCP", 00:22:44.968 "adrfam": "IPv4", 00:22:44.968 "traddr": "10.0.0.1", 00:22:44.968 "trsvcid": "51414" 00:22:44.968 }, 00:22:44.968 "auth": { 00:22:44.968 "state": "completed", 00:22:44.968 "digest": "sha512", 00:22:44.968 "dhgroup": "ffdhe3072" 00:22:44.968 } 00:22:44.968 } 00:22:44.968 ]' 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.968 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.228 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:45.229 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:45.800 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.800 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:45.800 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.800 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.800 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.800 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.800 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:45.800 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.062 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.323 00:22:46.323 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.323 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.323 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.323 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.323 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.323 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.323 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.323 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.323 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.323 { 00:22:46.323 "cntlid": 119, 00:22:46.323 "qid": 0, 00:22:46.323 "state": "enabled", 00:22:46.323 "thread": "nvmf_tgt_poll_group_000", 00:22:46.323 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:46.323 "listen_address": { 00:22:46.323 "trtype": "TCP", 00:22:46.323 "adrfam": "IPv4", 00:22:46.323 "traddr": "10.0.0.2", 00:22:46.323 "trsvcid": "4420" 00:22:46.323 }, 00:22:46.323 "peer_address": { 00:22:46.323 "trtype": "TCP", 00:22:46.323 "adrfam": "IPv4", 00:22:46.323 "traddr": "10.0.0.1", 00:22:46.323 "trsvcid": "51440" 00:22:46.323 }, 00:22:46.323 "auth": { 00:22:46.323 "state": "completed", 00:22:46.323 "digest": "sha512", 00:22:46.323 "dhgroup": "ffdhe3072" 00:22:46.323 } 00:22:46.323 } 00:22:46.323 ]' 00:22:46.323 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.583 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.583 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.583 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:46.583 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.583 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.583 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.583 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.842 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:46.842 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:47.419 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.419 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.419 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.419 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.419 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.419 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.419 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.419 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:47.419 14:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:47.419 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:47.419 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.419 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:47.419 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:47.419 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:47.419 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.419 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.419 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.419 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.420 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.420 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.420 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.420 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.679 00:22:47.679 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.679 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.679 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.938 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.938 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.938 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.938 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.938 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.938 14:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.938 { 00:22:47.938 "cntlid": 121, 00:22:47.938 "qid": 0, 00:22:47.938 "state": "enabled", 00:22:47.938 "thread": "nvmf_tgt_poll_group_000", 00:22:47.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:47.938 "listen_address": { 00:22:47.938 "trtype": "TCP", 00:22:47.938 "adrfam": "IPv4", 00:22:47.938 "traddr": "10.0.0.2", 00:22:47.938 "trsvcid": "4420" 00:22:47.938 }, 00:22:47.938 "peer_address": { 00:22:47.938 "trtype": "TCP", 00:22:47.938 "adrfam": "IPv4", 00:22:47.938 "traddr": "10.0.0.1", 00:22:47.938 "trsvcid": "51452" 00:22:47.938 }, 00:22:47.938 "auth": { 00:22:47.938 "state": "completed", 00:22:47.938 "digest": "sha512", 00:22:47.938 "dhgroup": "ffdhe4096" 00:22:47.938 } 00:22:47.938 } 00:22:47.938 ]' 00:22:47.938 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.938 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.938 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.198 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:48.198 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.198 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.198 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.198 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.198 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:48.198 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.137 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.397 00:22:49.397 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.397 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.397 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.657 { 00:22:49.657 "cntlid": 123, 00:22:49.657 "qid": 0, 00:22:49.657 "state": "enabled", 00:22:49.657 "thread": "nvmf_tgt_poll_group_000", 00:22:49.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:49.657 "listen_address": { 00:22:49.657 "trtype": "TCP", 00:22:49.657 "adrfam": "IPv4", 00:22:49.657 "traddr": "10.0.0.2", 00:22:49.657 "trsvcid": "4420" 00:22:49.657 }, 00:22:49.657 "peer_address": { 00:22:49.657 "trtype": "TCP", 00:22:49.657 "adrfam": "IPv4", 00:22:49.657 "traddr": "10.0.0.1", 00:22:49.657 "trsvcid": "55090" 00:22:49.657 }, 00:22:49.657 "auth": { 00:22:49.657 "state": "completed", 00:22:49.657 "digest": "sha512", 00:22:49.657 "dhgroup": "ffdhe4096" 00:22:49.657 } 00:22:49.657 } 00:22:49.657 ]' 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.657 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.917 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:49.917 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:50.488 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.488 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.488 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.488 14:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.488 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.488 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.488 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:50.488 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.748 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.007 00:22:51.007 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.007 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.007 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.007 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.007 14:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.007 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.007 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.268 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.268 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.268 { 00:22:51.268 "cntlid": 125, 00:22:51.268 "qid": 0, 00:22:51.268 "state": "enabled", 00:22:51.268 "thread": "nvmf_tgt_poll_group_000", 00:22:51.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:51.268 "listen_address": { 00:22:51.268 "trtype": "TCP", 00:22:51.268 "adrfam": "IPv4", 00:22:51.268 "traddr": "10.0.0.2", 00:22:51.268 "trsvcid": "4420" 00:22:51.268 }, 00:22:51.268 "peer_address": { 00:22:51.268 "trtype": "TCP", 00:22:51.268 "adrfam": "IPv4", 00:22:51.268 "traddr": "10.0.0.1", 00:22:51.268 "trsvcid": "55126" 00:22:51.268 }, 00:22:51.268 "auth": { 00:22:51.268 "state": "completed", 00:22:51.268 "digest": "sha512", 00:22:51.268 "dhgroup": "ffdhe4096" 00:22:51.268 } 00:22:51.268 } 00:22:51.268 ]' 00:22:51.268 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.268 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.268 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.268 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:51.268 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.268 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.268 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.268 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.528 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:51.528 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:52.098 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.098 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.098 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.098 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.098 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.098 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.098 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:52.098 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.358 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.618 00:22:52.618 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.618 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.618 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.618 14:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.618 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.618 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.618 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.618 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.618 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.618 { 00:22:52.618 "cntlid": 127, 00:22:52.618 "qid": 0, 00:22:52.618 "state": "enabled", 00:22:52.618 "thread": "nvmf_tgt_poll_group_000", 00:22:52.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:52.619 "listen_address": { 00:22:52.619 "trtype": "TCP", 00:22:52.619 "adrfam": "IPv4", 00:22:52.619 "traddr": "10.0.0.2", 00:22:52.619 "trsvcid": "4420" 00:22:52.619 }, 00:22:52.619 "peer_address": { 00:22:52.619 "trtype": "TCP", 00:22:52.619 "adrfam": "IPv4", 00:22:52.619 "traddr": "10.0.0.1", 00:22:52.619 "trsvcid": "55146" 00:22:52.619 }, 00:22:52.619 "auth": { 00:22:52.619 "state": "completed", 00:22:52.619 "digest": "sha512", 00:22:52.619 "dhgroup": "ffdhe4096" 00:22:52.619 } 00:22:52.619 } 00:22:52.619 ]' 00:22:52.619 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.880 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.880 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.880 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:52.880 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.880 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.880 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.880 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.141 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:53.141 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.712 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.283 00:22:54.283 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.283 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.283 
14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.283 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.283 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.283 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.283 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.283 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.283 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.283 { 00:22:54.283 "cntlid": 129, 00:22:54.283 "qid": 0, 00:22:54.283 "state": "enabled", 00:22:54.283 "thread": "nvmf_tgt_poll_group_000", 00:22:54.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:54.283 "listen_address": { 00:22:54.283 "trtype": "TCP", 00:22:54.283 "adrfam": "IPv4", 00:22:54.283 "traddr": "10.0.0.2", 00:22:54.283 "trsvcid": "4420" 00:22:54.283 }, 00:22:54.283 "peer_address": { 00:22:54.283 "trtype": "TCP", 00:22:54.283 "adrfam": "IPv4", 00:22:54.283 "traddr": "10.0.0.1", 00:22:54.283 "trsvcid": "55184" 00:22:54.283 }, 00:22:54.283 "auth": { 00:22:54.283 "state": "completed", 00:22:54.283 "digest": "sha512", 00:22:54.284 "dhgroup": "ffdhe6144" 00:22:54.284 } 00:22:54.284 } 00:22:54.284 ]' 00:22:54.284 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.544 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.544 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.544 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:54.544 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.544 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.544 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.544 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.805 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:54.805 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret 
DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:22:55.380 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.380 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:55.380 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.380 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.380 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.380 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.380 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:55.380 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:55.380 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:55.380 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.380 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:55.380 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:55.380 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:55.380 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.380 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.380 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.380 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.642 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.642 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.642 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.642 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.927 00:22:55.927 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.927 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.927 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.188 { 00:22:56.188 "cntlid": 131, 00:22:56.188 "qid": 0, 00:22:56.188 "state": "enabled", 00:22:56.188 "thread": "nvmf_tgt_poll_group_000", 00:22:56.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:56.188 "listen_address": { 00:22:56.188 "trtype": "TCP", 00:22:56.188 "adrfam": "IPv4", 00:22:56.188 "traddr": "10.0.0.2", 00:22:56.188 "trsvcid": "4420" 00:22:56.188 }, 00:22:56.188 "peer_address": { 00:22:56.188 "trtype": "TCP", 00:22:56.188 "adrfam": "IPv4", 00:22:56.188 "traddr": "10.0.0.1", 00:22:56.188 "trsvcid": "55214" 00:22:56.188 }, 00:22:56.188 "auth": { 00:22:56.188 "state": "completed", 00:22:56.188 "digest": "sha512", 00:22:56.188 "dhgroup": "ffdhe6144" 00:22:56.188 } 00:22:56.188 } 00:22:56.188 ]' 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.188 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.450 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:56.450 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:22:57.021 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.022 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:57.022 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.022 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.022 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.022 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.022 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:57.022 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.283 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.544 00:22:57.544 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.544 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.544 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.804 { 00:22:57.804 "cntlid": 133, 00:22:57.804 "qid": 0, 00:22:57.804 "state": "enabled", 00:22:57.804 "thread": "nvmf_tgt_poll_group_000", 00:22:57.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:57.804 "listen_address": { 00:22:57.804 "trtype": "TCP", 00:22:57.804 "adrfam": "IPv4", 00:22:57.804 "traddr": "10.0.0.2", 00:22:57.804 "trsvcid": "4420" 00:22:57.804 }, 00:22:57.804 "peer_address": { 00:22:57.804 "trtype": "TCP", 00:22:57.804 "adrfam": "IPv4", 00:22:57.804 "traddr": "10.0.0.1", 00:22:57.804 "trsvcid": "55226" 00:22:57.804 }, 00:22:57.804 "auth": { 00:22:57.804 "state": "completed", 00:22:57.804 "digest": "sha512", 00:22:57.804 "dhgroup": "ffdhe6144" 00:22:57.804 } 00:22:57.804 } 00:22:57.804 ]' 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.804 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.065 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret 
DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:58.065 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:22:58.638 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.638 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.638 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.638 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.638 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.638 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.638 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:58.638 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:58.899 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.159 00:22:59.159 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.159 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.159 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.420 { 00:22:59.420 "cntlid": 135, 00:22:59.420 "qid": 0, 00:22:59.420 "state": "enabled", 00:22:59.420 "thread": "nvmf_tgt_poll_group_000", 00:22:59.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:59.420 "listen_address": { 00:22:59.420 "trtype": "TCP", 00:22:59.420 "adrfam": "IPv4", 00:22:59.420 "traddr": "10.0.0.2", 00:22:59.420 "trsvcid": "4420" 00:22:59.420 }, 00:22:59.420 "peer_address": { 00:22:59.420 "trtype": "TCP", 00:22:59.420 "adrfam": "IPv4", 00:22:59.420 "traddr": "10.0.0.1", 00:22:59.420 "trsvcid": "41014" 00:22:59.420 }, 00:22:59.420 "auth": { 00:22:59.420 "state": "completed", 00:22:59.420 "digest": "sha512", 00:22:59.420 "dhgroup": "ffdhe6144" 00:22:59.420 } 00:22:59.420 } 00:22:59.420 ]' 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.420 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.680 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:22:59.680 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:23:00.252 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.252 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.252 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.253 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.253 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.253 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.253 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.253 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:00.253 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.514 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.087 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.087 { 00:23:01.087 "cntlid": 137, 00:23:01.087 "qid": 0, 00:23:01.087 "state": "enabled", 00:23:01.087 "thread": "nvmf_tgt_poll_group_000", 00:23:01.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:01.087 "listen_address": { 00:23:01.087 "trtype": "TCP", 00:23:01.087 "adrfam": "IPv4", 00:23:01.087 "traddr": "10.0.0.2", 00:23:01.087 "trsvcid": "4420" 00:23:01.087 }, 00:23:01.087 "peer_address": { 00:23:01.087 "trtype": "TCP", 00:23:01.087 "adrfam": "IPv4", 00:23:01.087 "traddr": "10.0.0.1", 00:23:01.087 "trsvcid": "41034" 00:23:01.087 }, 00:23:01.087 "auth": { 00:23:01.087 "state": "completed", 00:23:01.087 "digest": "sha512", 00:23:01.087 "dhgroup": "ffdhe8192" 00:23:01.087 } 00:23:01.087 } 00:23:01.087 ]' 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.087 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.347 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:01.347 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.347 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.347 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.347 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.347 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:23:01.347 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:23:01.918 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.179 14:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.179 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.180 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.751 00:23:02.751 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.751 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.751 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.012 { 00:23:03.012 "cntlid": 139, 00:23:03.012 "qid": 0, 00:23:03.012 "state": "enabled", 00:23:03.012 "thread": "nvmf_tgt_poll_group_000", 00:23:03.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:03.012 "listen_address": { 00:23:03.012 "trtype": "TCP", 00:23:03.012 "adrfam": "IPv4", 00:23:03.012 "traddr": "10.0.0.2", 00:23:03.012 "trsvcid": "4420" 00:23:03.012 }, 00:23:03.012 "peer_address": { 00:23:03.012 "trtype": "TCP", 00:23:03.012 "adrfam": "IPv4", 00:23:03.012 "traddr": "10.0.0.1", 00:23:03.012 "trsvcid": "41062" 00:23:03.012 }, 00:23:03.012 "auth": { 00:23:03.012 "state": "completed", 00:23:03.012 "digest": "sha512", 00:23:03.012 "dhgroup": "ffdhe8192" 00:23:03.012 } 00:23:03.012 } 00:23:03.012 ]' 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.012 14:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.012 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.272 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:23:03.272 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: --dhchap-ctrl-secret DHHC-1:02:OWE2ODgzNGQ2NWYzNTVjMTNlMTI2MWEyZDg4MTcxNWQzNTJmZWVhNzZlZjJlNzhkwTwugQ==: 00:23:03.842 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.842 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:03.842 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.842 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.842 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.842 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.842 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.842 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:04.102 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:04.102 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.102 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:04.102 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:04.102 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:04.102 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.102 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.103 14:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.103 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.103 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.103 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.103 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.103 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.363 00:23:04.624 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.624 { 00:23:04.624 "cntlid": 141, 00:23:04.624 "qid": 0, 00:23:04.624 "state": "enabled", 00:23:04.624 "thread": "nvmf_tgt_poll_group_000", 00:23:04.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:04.624 "listen_address": { 00:23:04.624 "trtype": "TCP", 00:23:04.624 "adrfam": "IPv4", 00:23:04.624 "traddr": "10.0.0.2", 00:23:04.624 "trsvcid": "4420" 00:23:04.624 }, 00:23:04.624 "peer_address": { 00:23:04.624 "trtype": "TCP", 00:23:04.624 "adrfam": "IPv4", 00:23:04.624 "traddr": "10.0.0.1", 00:23:04.624 "trsvcid": "41092" 00:23:04.624 }, 00:23:04.624 "auth": { 00:23:04.624 "state": "completed", 00:23:04.624 "digest": "sha512", 00:23:04.624 "dhgroup": "ffdhe8192" 00:23:04.624 } 00:23:04.624 } 00:23:04.624 ]' 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.624 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.885 14:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:04.885 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.885 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.885 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.885 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.145 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:23:05.145 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:01:ZDA4NDFmOTJmNmU3YTgzYmRhYzhlYzEyNDdjZTBiNzV4zDP4: 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.717 14:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:05.717 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:06.290 00:23:06.290 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:06.290 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:06.290 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.550 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.551 { 00:23:06.551 "cntlid": 143, 00:23:06.551 "qid": 0, 00:23:06.551 "state": "enabled", 00:23:06.551 "thread": "nvmf_tgt_poll_group_000", 00:23:06.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:06.551 "listen_address": { 00:23:06.551 "trtype": "TCP", 00:23:06.551 "adrfam": "IPv4", 00:23:06.551 "traddr": "10.0.0.2", 00:23:06.551 "trsvcid": "4420" 00:23:06.551 }, 00:23:06.551 "peer_address": { 00:23:06.551 "trtype": "TCP", 00:23:06.551 "adrfam": "IPv4", 00:23:06.551 "traddr": "10.0.0.1", 00:23:06.551 "trsvcid": "41110" 00:23:06.551 }, 00:23:06.551 "auth": { 00:23:06.551 "state": "completed", 00:23:06.551 "digest": "sha512", 00:23:06.551 "dhgroup": "ffdhe8192" 00:23:06.551 } 00:23:06.551 } 00:23:06.551 ]' 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.551 
14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.551 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.811 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:23:06.812 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:07.382 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.642 14:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.642 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.212 00:23:08.212 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:08.212 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:08.212 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.212 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.212 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.212 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.212 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.212 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.212 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.212 { 00:23:08.212 "cntlid": 145, 00:23:08.212 "qid": 0, 00:23:08.212 "state": "enabled", 00:23:08.212 "thread": "nvmf_tgt_poll_group_000", 00:23:08.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:08.212 "listen_address": { 00:23:08.212 "trtype": "TCP", 00:23:08.212 "adrfam": "IPv4", 00:23:08.212 "traddr": "10.0.0.2", 00:23:08.212 "trsvcid": "4420" 00:23:08.212 }, 00:23:08.212 "peer_address": { 00:23:08.212 
"trtype": "TCP", 00:23:08.212 "adrfam": "IPv4", 00:23:08.212 "traddr": "10.0.0.1", 00:23:08.212 "trsvcid": "41138" 00:23:08.212 }, 00:23:08.212 "auth": { 00:23:08.212 "state": "completed", 00:23:08.212 "digest": "sha512", 00:23:08.212 "dhgroup": "ffdhe8192" 00:23:08.212 } 00:23:08.212 } 00:23:08.212 ]' 00:23:08.212 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.472 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:08.472 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.472 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:08.472 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.472 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.472 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.472 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.733 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:23:08.733 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NzEzYjAwOTQ2NmJmOGI0MzRmNTQwODFiM2Y5NjUxZTVmZTcyZTMzOTZmOWI5ODNkZ5TE+w==: --dhchap-ctrl-secret DHHC-1:03:MzAyZDkwODBmYTk5ZGI3OTJjOTkyOTQ2YmQzNzlmMGNiYWRiZjhjNTM0YWJkNWJhOWMzYWM0MTVlNGMwMDIxOdgyg0s=: 00:23:09.354 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.354 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:09.354 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.354 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.354 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.354 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:09.354 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:09.355 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:09.615 request: 00:23:09.615 { 00:23:09.615 "name": "nvme0", 00:23:09.615 "trtype": "tcp", 00:23:09.615 "traddr": "10.0.0.2", 00:23:09.615 "adrfam": "ipv4", 00:23:09.615 "trsvcid": "4420", 00:23:09.615 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:09.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:09.615 "prchk_reftag": false, 00:23:09.615 "prchk_guard": false, 00:23:09.615 "hdgst": false, 00:23:09.615 "ddgst": false, 00:23:09.615 "dhchap_key": "key2", 00:23:09.615 "allow_unrecognized_csi": false, 00:23:09.615 "method": "bdev_nvme_attach_controller", 00:23:09.615 "req_id": 1 00:23:09.615 } 00:23:09.615 Got JSON-RPC error response 00:23:09.615 response: 00:23:09.615 { 00:23:09.615 "code": -5, 00:23:09.615 "message": "Input/output error" 00:23:09.615 } 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.615 14:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:09.615 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:10.185 request: 00:23:10.185 { 00:23:10.185 "name": "nvme0", 00:23:10.185 "trtype": "tcp", 00:23:10.185 "traddr": "10.0.0.2", 00:23:10.185 "adrfam": "ipv4", 00:23:10.185 "trsvcid": "4420", 00:23:10.185 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:10.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:10.185 "prchk_reftag": false, 00:23:10.185 "prchk_guard": false, 00:23:10.185 "hdgst": false, 00:23:10.185 "ddgst": false, 00:23:10.185 "dhchap_key": "key1", 00:23:10.185 "dhchap_ctrlr_key": "ckey2", 00:23:10.185 "allow_unrecognized_csi": false, 00:23:10.185 "method": "bdev_nvme_attach_controller", 00:23:10.185 "req_id": 1 00:23:10.185 } 00:23:10.185 Got JSON-RPC error response 00:23:10.185 response: 00:23:10.185 { 00:23:10.185 "code": -5, 00:23:10.185 "message": "Input/output error" 00:23:10.185 } 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:10.185 14:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.185 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.756 request: 00:23:10.756 { 00:23:10.756 "name": "nvme0", 00:23:10.756 "trtype": "tcp", 00:23:10.756 "traddr": "10.0.0.2", 00:23:10.756 "adrfam": "ipv4", 00:23:10.756 "trsvcid": "4420", 00:23:10.756 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:10.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:10.756 "prchk_reftag": false, 00:23:10.756 "prchk_guard": false, 00:23:10.756 "hdgst": false, 00:23:10.756 "ddgst": false, 00:23:10.756 "dhchap_key": "key1", 00:23:10.756 "dhchap_ctrlr_key": "ckey1", 00:23:10.756 "allow_unrecognized_csi": false, 00:23:10.756 "method": "bdev_nvme_attach_controller", 00:23:10.756 "req_id": 1 00:23:10.756 } 00:23:10.756 Got JSON-RPC error response 00:23:10.756 response: 00:23:10.756 { 00:23:10.756 "code": -5, 00:23:10.756 "message": "Input/output error" 00:23:10.756 } 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2800411 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2800411 ']' 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2800411 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800411 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800411' 00:23:10.756 killing process with pid 2800411 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2800411 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2800411 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2825978 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2825978 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2825978 ']' 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.756 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2825978 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2825978 ']' 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
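The stretch above repeats one connect_authenticate pattern per digest/dhgroup pair (sha512 with ffdhe6144 and ffdhe8192 here), followed by negative attach attempts that are expected to fail. The condensed sketch below is not part of the captured trace; it only restates that flow using the same rpc.py calls, host NQN and key names that appear in the entries above (target-side calls shown against the default RPC socket, host-side calls against /var/tmp/host.sock as in the trace).

# target side: allow the host on the subsystem with a given DH-HMAC-CHAP key pair
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: restrict the initiator to one digest/dhgroup, then attach with the matching keys
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
  -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# confirm the controller exists and that the qpair finished authentication as expected
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'              # nvme0
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'  # completed

# tear down before the next digest/dhgroup/key iteration
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# the negative cases re-add the host with only key1 and then attach with key2 or a
# mismatched controller key; bdev_nvme_attach_controller then returns the JSON-RPC
# error shown in the trace: "code": -5, "message": "Input/output error"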
00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.697 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.958 null0 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wBB 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.vdZ ]] 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vdZ 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.A4c 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.47S ]] 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.47S 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:11.958 14:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.o2r 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.1GV ]] 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1GV 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.958 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.u7M 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.219 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.220 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.220 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:12.220 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
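Condensed, each connect_authenticate round in this trace does three things: register the DH-HMAC-CHAP secret as a keyring entry, authorize the host NQN on the subsystem with that key, and attach a controller from the host-side app. A rough sketch of the sha512/ffdhe8192/key3 case above, using the key file and NQNs from this run (the rpc.py path is relative to an SPDK checkout, and the host-side app on /var/tmp/host.sock is assumed to have been started earlier with the same keyring entries loaded):

  rpc="./scripts/rpc.py"                            # target-side RPC (default /var/tmp/spdk.sock)
  hostrpc="./scripts/rpc.py -s /var/tmp/host.sock"  # host-side bdev_nvme app
  host_nqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  subsys=nqn.2024-03.io.spdk:cnode0

  # register the secret and authorize the host to use it against the subsystem
  $rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.u7M
  $rpc nvmf_subsystem_add_host "$subsys" "$host_nqn" --dhchap-key key3

  # attach an authenticated controller from the host side
  $hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$host_nqn" -n "$subsys" -b nvme0 --dhchap-key key3
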
00:23:12.220 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:12.789 nvme0n1 00:23:12.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:12.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:12.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.050 { 00:23:13.050 "cntlid": 1, 00:23:13.050 "qid": 0, 00:23:13.050 "state": "enabled", 00:23:13.050 "thread": "nvmf_tgt_poll_group_000", 00:23:13.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:13.050 "listen_address": { 00:23:13.050 "trtype": "TCP", 00:23:13.050 "adrfam": "IPv4", 00:23:13.050 "traddr": "10.0.0.2", 00:23:13.050 "trsvcid": "4420" 00:23:13.050 }, 00:23:13.050 "peer_address": { 00:23:13.050 "trtype": "TCP", 00:23:13.050 "adrfam": "IPv4", 00:23:13.050 "traddr": "10.0.0.1", 00:23:13.050 "trsvcid": "46258" 00:23:13.050 }, 00:23:13.050 "auth": { 00:23:13.050 "state": "completed", 00:23:13.050 "digest": "sha512", 00:23:13.050 "dhgroup": "ffdhe8192" 00:23:13.050 } 00:23:13.050 } 00:23:13.050 ]' 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:13.050 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.338 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.338 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.338 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.338 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:23:13.338 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:13.988 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:14.249 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:14.249 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:14.249 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:14.249 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:14.249 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.249 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:14.249 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.249 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:14.249 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.249 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.249 request: 00:23:14.249 { 00:23:14.249 "name": "nvme0", 00:23:14.249 "trtype": "tcp", 00:23:14.249 "traddr": "10.0.0.2", 00:23:14.249 "adrfam": "ipv4", 00:23:14.249 "trsvcid": "4420", 00:23:14.249 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:14.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:14.249 "prchk_reftag": false, 00:23:14.249 "prchk_guard": false, 00:23:14.249 "hdgst": false, 00:23:14.249 "ddgst": false, 00:23:14.249 "dhchap_key": "key3", 00:23:14.249 "allow_unrecognized_csi": false, 00:23:14.249 "method": "bdev_nvme_attach_controller", 00:23:14.249 "req_id": 1 00:23:14.249 } 00:23:14.249 Got JSON-RPC error response 00:23:14.249 response: 00:23:14.249 { 00:23:14.249 "code": -5, 00:23:14.249 "message": "Input/output error" 00:23:14.249 } 00:23:14.510 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:14.510 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:14.510 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:14.510 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:14.510 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:14.510 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:14.510 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:14.510 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:14.510 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:14.510 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:14.510 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:14.510 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:14.510 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.510 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:14.510 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.510 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:14.510 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.510 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.770 request: 00:23:14.770 { 00:23:14.770 "name": "nvme0", 00:23:14.770 "trtype": "tcp", 00:23:14.770 "traddr": "10.0.0.2", 00:23:14.770 "adrfam": "ipv4", 00:23:14.770 "trsvcid": "4420", 00:23:14.770 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:14.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:14.770 "prchk_reftag": false, 00:23:14.770 "prchk_guard": false, 00:23:14.770 "hdgst": false, 00:23:14.770 "ddgst": false, 00:23:14.770 "dhchap_key": "key3", 00:23:14.770 "allow_unrecognized_csi": false, 00:23:14.770 "method": "bdev_nvme_attach_controller", 00:23:14.770 "req_id": 1 00:23:14.770 } 00:23:14.770 Got JSON-RPC error response 00:23:14.770 response: 00:23:14.770 { 00:23:14.770 "code": -5, 00:23:14.770 "message": "Input/output error" 00:23:14.770 } 00:23:14.770 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:14.770 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:14.770 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:14.770 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:14.770 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:14.770 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:14.770 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:14.770 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.770 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.770 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:15.031 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:15.292 request: 00:23:15.292 { 00:23:15.292 "name": "nvme0", 00:23:15.292 "trtype": "tcp", 00:23:15.292 "traddr": "10.0.0.2", 00:23:15.292 "adrfam": "ipv4", 00:23:15.292 "trsvcid": "4420", 00:23:15.292 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:15.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:15.292 "prchk_reftag": false, 00:23:15.292 "prchk_guard": false, 00:23:15.292 "hdgst": false, 00:23:15.292 "ddgst": false, 00:23:15.292 "dhchap_key": "key0", 00:23:15.292 "dhchap_ctrlr_key": "key1", 00:23:15.292 "allow_unrecognized_csi": false, 00:23:15.292 "method": "bdev_nvme_attach_controller", 00:23:15.292 "req_id": 1 00:23:15.292 } 00:23:15.292 Got JSON-RPC error response 00:23:15.292 response: 00:23:15.292 { 00:23:15.292 "code": -5, 00:23:15.292 "message": "Input/output error" 00:23:15.292 } 00:23:15.292 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:15.292 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.292 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.292 14:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.293 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:15.293 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:15.293 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:15.554 nvme0n1 00:23:15.554 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:15.554 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:15.554 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.815 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.815 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.815 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.076 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:16.076 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.076 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.076 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.076 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:16.076 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:16.076 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:16.645 nvme0n1 00:23:16.645 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:16.645 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:16.645 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.905 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.905 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:16.905 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.905 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.905 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.905 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:16.905 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:16.905 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.165 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.165 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:23:17.165 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: --dhchap-ctrl-secret DHHC-1:03:MGEyZWVhNjZmMDVjNjQwYzEwNmJhMDI0ZTU0YTMwOTA2M2I2MGU3YmQzNDI4OGFlNGJkYjk1ODRlODBkNTM3ZdbasFI=: 00:23:17.736 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:17.736 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:17.736 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:17.736 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:17.736 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:17.736 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:17.736 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:17.736 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.736 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.997 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:23:17.997 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:17.997 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:17.997 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:17.997 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.997 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:17.997 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.997 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:17.997 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:17.997 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:18.257 request: 00:23:18.257 { 00:23:18.257 "name": "nvme0", 00:23:18.257 "trtype": "tcp", 00:23:18.257 "traddr": "10.0.0.2", 00:23:18.257 "adrfam": "ipv4", 00:23:18.257 "trsvcid": "4420", 00:23:18.257 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:18.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:18.257 "prchk_reftag": false, 00:23:18.257 "prchk_guard": false, 00:23:18.257 "hdgst": false, 00:23:18.257 "ddgst": false, 00:23:18.257 "dhchap_key": "key1", 00:23:18.257 "allow_unrecognized_csi": false, 00:23:18.257 "method": "bdev_nvme_attach_controller", 00:23:18.257 "req_id": 1 00:23:18.257 } 00:23:18.257 Got JSON-RPC error response 00:23:18.257 response: 00:23:18.257 { 00:23:18.258 "code": -5, 00:23:18.258 "message": "Input/output error" 00:23:18.258 } 00:23:18.258 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:18.258 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.258 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.258 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.258 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:18.258 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:18.258 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:19.198 nvme0n1 00:23:19.198 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:19.198 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.198 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:19.198 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.198 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.198 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.460 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:19.460 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.460 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.460 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.460 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:19.460 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:19.460 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:19.720 nvme0n1 00:23:19.720 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:19.720 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:19.720 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.720 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.720 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.720 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: '' 2s 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: ]] 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NGQ4NjI4NmZmOGI1ZDk4OWYwNWFhZDkwODU1N2I3ZWK9FsfT: 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:19.982 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: 2s 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: 00:23:22.530 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:22.531 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:22.531 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:22.531 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: ]] 00:23:22.531 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODAxZjU3NzhmZDNkZDA3ZGY2ZTBhMzgxMWIxMTZhNTE1MDY0ZWNjYWYzZWUyMmMy2Tnqaw==: 00:23:22.531 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:22.531 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:24.441 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:24.441 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:24.441 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:24.441 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:24.441 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:24.441 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:24.441 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:24.441 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.442 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:24.442 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.442 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.442 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.442 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:24.442 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:24.442 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:25.010 nvme0n1 00:23:25.010 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:25.010 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.010 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.010 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.010 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:25.010 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:25.269 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:25.269 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:25.269 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.529 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.529 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.529 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.529 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.529 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.529 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:25.529 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:25.789 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:25.789 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:25.789 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:26.049 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:26.370 request: 00:23:26.370 { 00:23:26.370 "name": "nvme0", 00:23:26.370 "dhchap_key": "key1", 00:23:26.370 "dhchap_ctrlr_key": "key3", 00:23:26.370 "method": "bdev_nvme_set_keys", 00:23:26.370 "req_id": 1 00:23:26.370 } 00:23:26.370 Got JSON-RPC error response 00:23:26.370 response: 00:23:26.370 { 00:23:26.370 "code": -13, 00:23:26.370 "message": "Permission denied" 00:23:26.370 } 00:23:26.370 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:26.370 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:26.370 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:26.370 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:26.370 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:26.370 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:26.370 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.636 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:26.636 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:27.603 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:27.603 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:27.603 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.863 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:27.863 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:27.863 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.863 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.863 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.863 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:27.863 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:27.863 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:28.432 nvme0n1 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
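The re-key exchange traced above comes down to one RPC on each side, followed by a poll of the host's controller list to observe the outcome. A hedged sketch reusing $rpc, $hostrpc, $subsys and $host_nqn from the earlier sketch; the jq-length loop mirrors the test's sleep-and-recheck pattern, which in this trace waits for the stale controller to drop to 0 after the mismatched re-key attempt is rejected with the -13 "Permission denied" shown above:

  # rotate the DH-HMAC-CHAP keys for an already-attached controller
  $rpc nvmf_subsystem_set_keys "$subsys" "$host_nqn" \
      --dhchap-key key2 --dhchap-ctrlr-key key3        # target side first
  $hostrpc bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3        # then the host side

  # poll the host-side controller list until it reaches the expected count
  expected=0   # the trace above waits for the stale controller to disappear
  while [ "$($hostrpc bdev_nvme_get_controllers | jq length)" != "$expected" ]; do
      sleep 1
  done
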
00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:28.432 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:29.001 request: 00:23:29.001 { 00:23:29.001 "name": "nvme0", 00:23:29.001 "dhchap_key": "key2", 00:23:29.001 "dhchap_ctrlr_key": "key0", 00:23:29.001 "method": "bdev_nvme_set_keys", 00:23:29.001 "req_id": 1 00:23:29.001 } 00:23:29.001 Got JSON-RPC error response 00:23:29.001 response: 00:23:29.001 { 00:23:29.001 "code": -13, 00:23:29.001 "message": "Permission denied" 00:23:29.001 } 00:23:29.001 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:29.001 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:29.001 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:29.001 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:29.001 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:29.002 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:29.002 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.261 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:29.261 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:30.204 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:30.204 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:30.204 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.465 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:30.465 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:30.465 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:30.465 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2800490 00:23:30.465 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2800490 ']' 00:23:30.465 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2800490 00:23:30.465 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:30.465 
14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.465 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800490 00:23:30.465 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:30.466 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.466 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800490' 00:23:30.466 killing process with pid 2800490 00:23:30.466 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2800490 00:23:30.466 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2800490 00:23:30.726 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:30.726 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.726 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.727 rmmod nvme_tcp 00:23:30.727 rmmod nvme_fabrics 00:23:30.727 rmmod nvme_keyring 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2825978 ']' 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2825978 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2825978 ']' 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2825978 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825978 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825978' 00:23:30.727 killing process with pid 2825978 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2825978 00:23:30.727 14:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2825978 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.727 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wBB /tmp/spdk.key-sha256.A4c /tmp/spdk.key-sha384.o2r /tmp/spdk.key-sha512.u7M /tmp/spdk.key-sha512.vdZ /tmp/spdk.key-sha384.47S /tmp/spdk.key-sha256.1GV '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:33.277 00:23:33.277 real 2m32.753s 00:23:33.277 user 5m44.622s 00:23:33.277 sys 0m21.991s 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.277 ************************************ 00:23:33.277 END TEST nvmf_auth_target 00:23:33.277 ************************************ 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:33.277 ************************************ 00:23:33.277 START TEST nvmf_bdevio_no_huge 00:23:33.277 ************************************ 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:33.277 * Looking for test storage... 
00:23:33.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.277 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:33.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.278 --rc genhtml_branch_coverage=1 00:23:33.278 --rc genhtml_function_coverage=1 00:23:33.278 --rc genhtml_legend=1 00:23:33.278 --rc geninfo_all_blocks=1 00:23:33.278 --rc geninfo_unexecuted_blocks=1 00:23:33.278 00:23:33.278 ' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:33.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.278 --rc genhtml_branch_coverage=1 00:23:33.278 --rc genhtml_function_coverage=1 00:23:33.278 --rc genhtml_legend=1 00:23:33.278 --rc geninfo_all_blocks=1 00:23:33.278 --rc geninfo_unexecuted_blocks=1 00:23:33.278 00:23:33.278 ' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:33.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.278 --rc genhtml_branch_coverage=1 00:23:33.278 --rc genhtml_function_coverage=1 00:23:33.278 --rc genhtml_legend=1 00:23:33.278 --rc geninfo_all_blocks=1 00:23:33.278 --rc geninfo_unexecuted_blocks=1 00:23:33.278 00:23:33.278 ' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:33.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.278 --rc genhtml_branch_coverage=1 00:23:33.278 --rc genhtml_function_coverage=1 00:23:33.278 --rc genhtml_legend=1 00:23:33.278 --rc geninfo_all_blocks=1 00:23:33.278 --rc geninfo_unexecuted_blocks=1 00:23:33.278 00:23:33.278 ' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:33.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.278 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:41.427 
14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:41.427 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.427 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:41.428 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:41.428 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:41.428 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.428 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:41.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:23:41.428 00:23:41.428 --- 10.0.0.2 ping statistics --- 00:23:41.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.428 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:23:41.428 00:23:41.428 --- 10.0.0.1 ping statistics --- 00:23:41.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.428 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2834102 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2834102 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2834102 ']' 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.428 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.428 [2024-12-06 14:17:29.278317] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:23:41.428 [2024-12-06 14:17:29.278388] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:41.428 [2024-12-06 14:17:29.387815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.428 [2024-12-06 14:17:29.448417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.428 [2024-12-06 14:17:29.448480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.428 [2024-12-06 14:17:29.448489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.428 [2024-12-06 14:17:29.448496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.428 [2024-12-06 14:17:29.448503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
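Condensed from the environment setup traced above, the physical-port topology and the no-hugepage target launch come down to the following (a sketch with paths shortened; cvl_0_0 and cvl_0_1 are the net devices found under the two E810 ports):

# Put one port in a private namespace so the target (10.0.0.2) and the
# host (10.0.0.1) exchange NVMe/TCP traffic over the physical link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port; the comment lets cleanup strip the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

# Launch the target without hugepages: --no-huge switches DPDK to anonymous
# 4 KiB pages, -s 1024 caps its memory at 1 GiB, -m 0x78 pins reactors to cores 3-6.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

The RPCs that follow (nvmf_create_transport -t tcp -o -u 8192, bdev_malloc_create 64 512, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) then publish a 64 MiB malloc bdev as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 for the bdevio run.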
00:23:41.428 [2024-12-06 14:17:29.450025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:41.428 [2024-12-06 14:17:29.450183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:41.428 [2024-12-06 14:17:29.450342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.428 [2024-12-06 14:17:29.450342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.690 [2024-12-06 14:17:30.154632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.690 Malloc0 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.690 [2024-12-06 14:17:30.208906] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.690 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.690 { 00:23:41.690 "params": { 00:23:41.690 "name": "Nvme$subsystem", 00:23:41.690 "trtype": "$TEST_TRANSPORT", 00:23:41.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.690 "adrfam": "ipv4", 00:23:41.691 "trsvcid": "$NVMF_PORT", 00:23:41.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.691 "hdgst": ${hdgst:-false}, 00:23:41.691 "ddgst": ${ddgst:-false} 00:23:41.691 }, 00:23:41.691 "method": "bdev_nvme_attach_controller" 00:23:41.691 } 00:23:41.691 EOF 00:23:41.691 )") 00:23:41.691 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:41.691 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:23:41.691 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:41.691 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:41.691 "params": { 00:23:41.691 "name": "Nvme1", 00:23:41.691 "trtype": "tcp", 00:23:41.691 "traddr": "10.0.0.2", 00:23:41.691 "adrfam": "ipv4", 00:23:41.691 "trsvcid": "4420", 00:23:41.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.691 "hdgst": false, 00:23:41.691 "ddgst": false 00:23:41.691 }, 00:23:41.691 "method": "bdev_nvme_attach_controller" 00:23:41.691 }' 00:23:41.691 [2024-12-06 14:17:30.268137] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:23:41.691 [2024-12-06 14:17:30.268202] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2834243 ] 00:23:41.951 [2024-12-06 14:17:30.366984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:41.951 [2024-12-06 14:17:30.427421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.951 [2024-12-06 14:17:30.427585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.951 [2024-12-06 14:17:30.427742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.211 I/O targets: 00:23:42.211 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:42.211 00:23:42.211 00:23:42.211 CUnit - A unit testing framework for C - Version 2.1-3 00:23:42.211 http://cunit.sourceforge.net/ 00:23:42.211 00:23:42.211 00:23:42.211 Suite: bdevio tests on: Nvme1n1 00:23:42.211 Test: blockdev write read block ...passed 00:23:42.211 Test: blockdev write zeroes read block ...passed 00:23:42.211 Test: blockdev write zeroes read no split ...passed 00:23:42.472 Test: blockdev write zeroes read split ...passed 00:23:42.472 Test: blockdev write zeroes read split partial ...passed 00:23:42.472 Test: blockdev reset ...[2024-12-06 14:17:30.914112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:42.472 [2024-12-06 14:17:30.914211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186c520 (9): Bad file descriptor 00:23:42.472 [2024-12-06 14:17:30.968961] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:23:42.472 passed 00:23:42.472 Test: blockdev write read 8 blocks ...passed 00:23:42.472 Test: blockdev write read size > 128k ...passed 00:23:42.472 Test: blockdev write read invalid size ...passed 00:23:42.472 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:42.472 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:42.472 Test: blockdev write read max offset ...passed 00:23:42.732 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:42.732 Test: blockdev writev readv 8 blocks ...passed 00:23:42.732 Test: blockdev writev readv 30 x 1block ...passed 00:23:42.732 Test: blockdev writev readv block ...passed 00:23:42.732 Test: blockdev writev readv size > 128k ...passed 00:23:42.732 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:42.732 Test: blockdev comparev and writev ...[2024-12-06 14:17:31.237517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.732 [2024-12-06 14:17:31.237569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:42.732 [2024-12-06 14:17:31.237586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.732 [2024-12-06 14:17:31.237595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:42.732 [2024-12-06 14:17:31.238156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.732 [2024-12-06 14:17:31.238172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:42.732 [2024-12-06 14:17:31.238187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.732 [2024-12-06 14:17:31.238196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:42.732 [2024-12-06 14:17:31.238684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.732 [2024-12-06 14:17:31.238699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:42.732 [2024-12-06 14:17:31.238713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.732 [2024-12-06 14:17:31.238722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:42.732 [2024-12-06 14:17:31.239247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.732 [2024-12-06 14:17:31.239262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:42.732 [2024-12-06 14:17:31.239276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.732 [2024-12-06 14:17:31.239298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:42.732 passed 00:23:42.732 Test: blockdev nvme passthru rw ...passed 00:23:42.732 Test: blockdev nvme passthru vendor specific ...[2024-12-06 14:17:31.323331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:42.732 [2024-12-06 14:17:31.323350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:42.732 [2024-12-06 14:17:31.323748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:42.732 [2024-12-06 14:17:31.323761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:42.732 [2024-12-06 14:17:31.324135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:42.732 [2024-12-06 14:17:31.324148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:42.732 [2024-12-06 14:17:31.324533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:42.732 [2024-12-06 14:17:31.324548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:42.732 passed 00:23:42.732 Test: blockdev nvme admin passthru ...passed 00:23:42.994 Test: blockdev copy ...passed 00:23:42.994 00:23:42.994 Run Summary: Type Total Ran Passed Failed Inactive 00:23:42.994 suites 1 1 n/a 0 0 00:23:42.994 tests 23 23 23 0 0 00:23:42.994 asserts 152 152 152 0 n/a 00:23:42.994 00:23:42.994 Elapsed time = 1.304 seconds 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.254 rmmod nvme_tcp 00:23:43.254 rmmod nvme_fabrics 00:23:43.254 rmmod nvme_keyring 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2834102 ']' 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2834102 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2834102 ']' 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2834102 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834102 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:43.254 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834102' 00:23:43.255 killing process with pid 2834102 00:23:43.255 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2834102 00:23:43.255 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2834102 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.515 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.054 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.054 00:23:46.054 real 0m12.657s 00:23:46.054 user 0m15.332s 00:23:46.054 sys 0m6.627s 00:23:46.054 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.054 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:46.054 ************************************ 00:23:46.054 END TEST nvmf_bdevio_no_huge 00:23:46.054 ************************************ 00:23:46.054 14:17:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:46.054 14:17:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:46.054 14:17:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.054 14:17:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:46.054 ************************************ 00:23:46.054 START TEST nvmf_tls 00:23:46.054 ************************************ 00:23:46.054 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:46.054 * Looking for test storage... 00:23:46.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:46.054 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:46.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.055 --rc genhtml_branch_coverage=1 00:23:46.055 --rc genhtml_function_coverage=1 00:23:46.055 --rc genhtml_legend=1 00:23:46.055 --rc geninfo_all_blocks=1 00:23:46.055 --rc geninfo_unexecuted_blocks=1 00:23:46.055 00:23:46.055 ' 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:46.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.055 --rc genhtml_branch_coverage=1 00:23:46.055 --rc genhtml_function_coverage=1 00:23:46.055 --rc genhtml_legend=1 00:23:46.055 --rc geninfo_all_blocks=1 00:23:46.055 --rc geninfo_unexecuted_blocks=1 00:23:46.055 00:23:46.055 ' 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:46.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.055 --rc genhtml_branch_coverage=1 00:23:46.055 --rc genhtml_function_coverage=1 00:23:46.055 --rc genhtml_legend=1 00:23:46.055 --rc geninfo_all_blocks=1 00:23:46.055 --rc geninfo_unexecuted_blocks=1 00:23:46.055 00:23:46.055 ' 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:46.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.055 --rc genhtml_branch_coverage=1 00:23:46.055 --rc genhtml_function_coverage=1 00:23:46.055 --rc genhtml_legend=1 00:23:46.055 --rc geninfo_all_blocks=1 00:23:46.055 --rc geninfo_unexecuted_blocks=1 00:23:46.055 00:23:46.055 ' 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
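The lcov check traced above runs scripts/common.sh's cmp_versions helper: both version strings are split on '.', '-' and ':' and the fields are compared numerically, left to right, until one side wins (here 1.15 < 2, so lcov_rc_opt is set to the --rc lcov_branch_coverage/--rc lcov_function_coverage form). A minimal bash sketch of the same idea follows; it is a simplified stand-in for illustration, not the script's exact helper, and it assumes plain numeric fields.

    # Illustrative re-implementation of the comparison the trace performs:
    # split both versions on '.', '-' and ':' (the same IFS cmp_versions uses),
    # then compare each numeric field left to right; the first difference decides.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < len; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 < 2"    # prints: 1.15 < 2

The real cmp_versions also normalizes each field through the decimal helper visible in the trace; the sketch skips that step.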
00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:46.055 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.056 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:54.199 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:54.199 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:54.199 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:54.199 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.199 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:54.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:23:54.200 00:23:54.200 --- 10.0.0.2 ping statistics --- 00:23:54.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.200 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:54.200 00:23:54.200 --- 10.0.0.1 ping statistics --- 00:23:54.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.200 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:54.200 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2838893 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2838893 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2838893 ']' 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.200 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.200 [2024-12-06 14:17:42.076731] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:23:54.200 [2024-12-06 14:17:42.076797] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.200 [2024-12-06 14:17:42.180649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.200 [2024-12-06 14:17:42.230810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.200 [2024-12-06 14:17:42.230861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.200 [2024-12-06 14:17:42.230870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.200 [2024-12-06 14:17:42.230877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.200 [2024-12-06 14:17:42.230883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.200 [2024-12-06 14:17:42.231629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.461 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.461 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.462 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.462 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.462 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.462 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.462 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:54.462 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:54.723 true 00:23:54.723 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:54.723 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:54.723 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:54.723 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:54.723 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:54.984 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:54.984 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:55.245 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:55.245 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:55.245 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:55.507 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:55.507 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:55.507 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:55.507 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:55.507 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:55.507 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:55.767 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:55.767 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:55.767 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:56.028 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:56.028 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:56.028 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:56.028 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:56.028 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:56.288 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:56.288 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:56.549 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:56.549 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:56.549 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:56.549 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:56.549 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:56.549 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:56.549 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:56.549 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:56.549 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:56.549 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:56.549 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:56.549 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:56.549 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:56.549 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:56.549 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:56.549 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:56.549 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:56.549 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:56.549 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:56.550 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.1oC1httRft 00:23:56.550 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:56.550 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.bImA7pkuRb 00:23:56.550 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:56.550 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:56.550 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.1oC1httRft 00:23:56.550 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.bImA7pkuRb 00:23:56.550 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:56.809 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:57.068 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.1oC1httRft 00:23:57.069 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1oC1httRft 00:23:57.069 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:57.069 [2024-12-06 14:17:45.630835] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.069 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:57.328 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:57.328 [2024-12-06 14:17:45.963639] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.328 [2024-12-06 14:17:45.963854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.587 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:57.587 malloc0 00:23:57.587 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:57.846 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1oC1httRft 00:23:57.846 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:58.107 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1oC1httRft 00:24:10.334 Initializing NVMe Controllers 00:24:10.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:10.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:10.334 Initialization complete. Launching workers. 00:24:10.334 ======================================================== 00:24:10.334 Latency(us) 00:24:10.334 Device Information : IOPS MiB/s Average min max 00:24:10.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18649.79 72.85 3431.90 1019.24 4217.86 00:24:10.334 ======================================================== 00:24:10.334 Total : 18649.79 72.85 3431.90 1019.24 4217.86 00:24:10.334 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1oC1httRft 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1oC1httRft 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2841647 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2841647 /var/tmp/bdevperf.sock 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2841647 ']' 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:10.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.334 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.334 [2024-12-06 14:17:56.819158] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:10.334 [2024-12-06 14:17:56.819214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841647 ] 00:24:10.334 [2024-12-06 14:17:56.907199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.334 [2024-12-06 14:17:56.942391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.334 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.334 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:10.334 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1oC1httRft 00:24:10.334 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:10.334 [2024-12-06 14:17:57.923111] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.334 TLSTESTn1 00:24:10.334 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:10.334 Running I/O for 10 seconds... 
00:24:11.548 4091.00 IOPS, 15.98 MiB/s [2024-12-06T13:18:01.130Z] 5088.00 IOPS, 19.88 MiB/s [2024-12-06T13:18:02.511Z] 5294.33 IOPS, 20.68 MiB/s [2024-12-06T13:18:03.453Z] 5339.25 IOPS, 20.86 MiB/s [2024-12-06T13:18:04.191Z] 5558.80 IOPS, 21.71 MiB/s [2024-12-06T13:18:05.154Z] 5706.00 IOPS, 22.29 MiB/s [2024-12-06T13:18:06.535Z] 5576.57 IOPS, 21.78 MiB/s [2024-12-06T13:18:07.475Z] 5509.12 IOPS, 21.52 MiB/s [2024-12-06T13:18:08.410Z] 5605.56 IOPS, 21.90 MiB/s [2024-12-06T13:18:08.410Z] 5706.60 IOPS, 22.29 MiB/s 00:24:19.770 Latency(us) 00:24:19.770 [2024-12-06T13:18:08.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.770 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:19.770 Verification LBA range: start 0x0 length 0x2000 00:24:19.770 TLSTESTn1 : 10.01 5712.62 22.31 0.00 0.00 22373.62 4450.99 39321.60 00:24:19.770 [2024-12-06T13:18:08.410Z] =================================================================================================================== 00:24:19.770 [2024-12-06T13:18:08.410Z] Total : 5712.62 22.31 0.00 0.00 22373.62 4450.99 39321.60 00:24:19.770 { 00:24:19.770 "results": [ 00:24:19.770 { 00:24:19.770 "job": "TLSTESTn1", 00:24:19.770 "core_mask": "0x4", 00:24:19.770 "workload": "verify", 00:24:19.770 "status": "finished", 00:24:19.770 "verify_range": { 00:24:19.770 "start": 0, 00:24:19.770 "length": 8192 00:24:19.770 }, 00:24:19.770 "queue_depth": 128, 00:24:19.770 "io_size": 4096, 00:24:19.770 "runtime": 10.011685, 00:24:19.770 "iops": 5712.624797923626, 00:24:19.770 "mibps": 22.314940616889164, 00:24:19.770 "io_failed": 0, 00:24:19.770 "io_timeout": 0, 00:24:19.770 "avg_latency_us": 22373.618000338036, 00:24:19.770 "min_latency_us": 4450.986666666667, 00:24:19.770 "max_latency_us": 39321.6 00:24:19.770 } 00:24:19.770 ], 00:24:19.770 "core_count": 1 00:24:19.770 } 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2841647 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2841647 ']' 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2841647 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2841647 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2841647' 00:24:19.771 killing process with pid 2841647 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2841647 00:24:19.771 Received shutdown signal, test time was about 10.000000 seconds 00:24:19.771 00:24:19.771 Latency(us) 00:24:19.771 [2024-12-06T13:18:08.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.771 [2024-12-06T13:18:08.411Z] 
=================================================================================================================== 00:24:19.771 [2024-12-06T13:18:08.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2841647 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bImA7pkuRb 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bImA7pkuRb 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bImA7pkuRb 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bImA7pkuRb 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2844101 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2844101 /var/tmp/bdevperf.sock 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2844101 ']' 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.771 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.771 [2024-12-06 14:18:08.390303] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:19.771 [2024-12-06 14:18:08.390357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844101 ] 00:24:20.030 [2024-12-06 14:18:08.471422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.030 [2024-12-06 14:18:08.499947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.599 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.599 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:20.599 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bImA7pkuRb 00:24:20.859 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:20.859 [2024-12-06 14:18:09.483885] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.859 [2024-12-06 14:18:09.490554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:20.859 [2024-12-06 14:18:09.491133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b6830 (107): Transport endpoint is not connected 00:24:20.859 [2024-12-06 14:18:09.492129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b6830 (9): Bad file descriptor 00:24:20.859 [2024-12-06 14:18:09.493131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:20.859 [2024-12-06 14:18:09.493140] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:20.859 [2024-12-06 14:18:09.493146] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:20.859 [2024-12-06 14:18:09.493155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
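The attach that just failed uses the same two initiator-side RPC calls as the passing TLSTESTn1 run earlier, only pointed at the mismatched key file. A condensed sketch follows; the socket path, address and NQNs are the ones in the trace, and PSK_FILE is a placeholder for the mktemp'd key (/tmp/tmp.1oC1httRft in the passing run, /tmp/tmp.bImA7pkuRb here). The JSON-RPC dump right after this shows how bdevperf reports the resulting bdev_nvme_attach_controller failure.

    # Initiator-side sequence from the trace: register the PSK file with the
    # bdevperf RPC server, then attach a controller over TCP using that key.
    # PSK_FILE is a placeholder for the key file created with mktemp.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    "$rpc" -s "$sock" keyring_file_add_key key0 "$PSK_FILE"
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key0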
00:24:20.859 request: 00:24:20.859 { 00:24:20.859 "name": "TLSTEST", 00:24:20.859 "trtype": "tcp", 00:24:20.859 "traddr": "10.0.0.2", 00:24:20.859 "adrfam": "ipv4", 00:24:20.859 "trsvcid": "4420", 00:24:20.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:20.859 "prchk_reftag": false, 00:24:20.859 "prchk_guard": false, 00:24:20.859 "hdgst": false, 00:24:20.859 "ddgst": false, 00:24:20.859 "psk": "key0", 00:24:20.859 "allow_unrecognized_csi": false, 00:24:20.859 "method": "bdev_nvme_attach_controller", 00:24:20.859 "req_id": 1 00:24:20.859 } 00:24:20.859 Got JSON-RPC error response 00:24:20.859 response: 00:24:20.859 { 00:24:20.859 "code": -5, 00:24:20.859 "message": "Input/output error" 00:24:20.859 } 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2844101 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2844101 ']' 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2844101 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2844101 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2844101' 00:24:21.119 killing process with pid 2844101 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2844101 00:24:21.119 Received shutdown signal, test time was about 10.000000 seconds 00:24:21.119 00:24:21.119 Latency(us) 00:24:21.119 [2024-12-06T13:18:09.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.119 [2024-12-06T13:18:09.759Z] =================================================================================================================== 00:24:21.119 [2024-12-06T13:18:09.759Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2844101 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1oC1httRft 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.1oC1httRft 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1oC1httRft 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1oC1httRft 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2844583 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2844583 /var/tmp/bdevperf.sock 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2844583 ']' 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.119 14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.119 [2024-12-06 14:18:09.726108] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
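The case being set up above registers the PSK file with the bdevperf instance but then attaches with hostnqn nqn.2016-06.io.spdk:host2, an identity the target has no PSK registered for, so the TLS handshake cannot find a key and the attach is expected to fail with an I/O error. A minimal sketch of the two RPCs the wrapper issues, with the socket path, key file and NQNs taken from the trace (the full workspace path to rpc.py is shortened here):

    # Register the PSK file under the name key0 on the bdevperf RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1oC1httRft

    # Attach over TCP with TLS using that key; host2 has no PSK bound on the
    # target side, so the trace that follows reports "Could not find PSK for
    # identity" and the RPC returns -5 (Input/output error)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0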
00:24:21.119 [2024-12-06 14:18:09.726164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844583 ] 00:24:21.379 [2024-12-06 14:18:09.809256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.379 [2024-12-06 14:18:09.837810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.946 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.946 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:21.946 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1oC1httRft 00:24:22.205 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:22.205 [2024-12-06 14:18:10.834306] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.465 [2024-12-06 14:18:10.843751] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:22.465 [2024-12-06 14:18:10.843771] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:22.465 [2024-12-06 14:18:10.843791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:22.465 [2024-12-06 14:18:10.844441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f830 (107): Transport endpoint is not connected 00:24:22.465 [2024-12-06 14:18:10.845436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f830 (9): Bad file descriptor 00:24:22.465 [2024-12-06 14:18:10.846438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:22.465 [2024-12-06 14:18:10.846447] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:22.465 [2024-12-06 14:18:10.846453] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:22.465 [2024-12-06 14:18:10.846464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:24:22.465 request: 00:24:22.465 { 00:24:22.465 "name": "TLSTEST", 00:24:22.465 "trtype": "tcp", 00:24:22.465 "traddr": "10.0.0.2", 00:24:22.465 "adrfam": "ipv4", 00:24:22.465 "trsvcid": "4420", 00:24:22.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.465 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:22.465 "prchk_reftag": false, 00:24:22.465 "prchk_guard": false, 00:24:22.465 "hdgst": false, 00:24:22.465 "ddgst": false, 00:24:22.465 "psk": "key0", 00:24:22.465 "allow_unrecognized_csi": false, 00:24:22.465 "method": "bdev_nvme_attach_controller", 00:24:22.465 "req_id": 1 00:24:22.465 } 00:24:22.465 Got JSON-RPC error response 00:24:22.465 response: 00:24:22.465 { 00:24:22.465 "code": -5, 00:24:22.465 "message": "Input/output error" 00:24:22.465 } 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2844583 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2844583 ']' 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2844583 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2844583 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2844583' 00:24:22.465 killing process with pid 2844583 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2844583 00:24:22.465 Received shutdown signal, test time was about 10.000000 seconds 00:24:22.465 00:24:22.465 Latency(us) 00:24:22.465 [2024-12-06T13:18:11.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.465 [2024-12-06T13:18:11.105Z] =================================================================================================================== 00:24:22.465 [2024-12-06T13:18:11.105Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:22.465 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2844583 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1oC1httRft 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.1oC1httRft 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1oC1httRft 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1oC1httRft 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2844958 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2844958 /var/tmp/bdevperf.sock 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2844958 ']' 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.465 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.465 [2024-12-06 14:18:11.079742] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:24:22.465 [2024-12-06 14:18:11.079799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844958 ] 00:24:22.725 [2024-12-06 14:18:11.162140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.725 [2024-12-06 14:18:11.190704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.297 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.297 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.297 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1oC1httRft 00:24:23.557 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:23.818 [2024-12-06 14:18:12.202746] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.818 [2024-12-06 14:18:12.207268] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:23.818 [2024-12-06 14:18:12.207288] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:23.818 [2024-12-06 14:18:12.207308] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:23.818 [2024-12-06 14:18:12.207948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2394830 (107): Transport endpoint is not connected 00:24:23.818 [2024-12-06 14:18:12.208943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2394830 (9): Bad file descriptor 00:24:23.818 [2024-12-06 14:18:12.209945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:23.818 [2024-12-06 14:18:12.209953] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:23.818 [2024-12-06 14:18:12.209959] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:23.818 [2024-12-06 14:18:12.209966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:24:23.818 request: 00:24:23.818 { 00:24:23.818 "name": "TLSTEST", 00:24:23.818 "trtype": "tcp", 00:24:23.818 "traddr": "10.0.0.2", 00:24:23.818 "adrfam": "ipv4", 00:24:23.818 "trsvcid": "4420", 00:24:23.818 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:23.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:23.818 "prchk_reftag": false, 00:24:23.818 "prchk_guard": false, 00:24:23.818 "hdgst": false, 00:24:23.818 "ddgst": false, 00:24:23.818 "psk": "key0", 00:24:23.818 "allow_unrecognized_csi": false, 00:24:23.818 "method": "bdev_nvme_attach_controller", 00:24:23.818 "req_id": 1 00:24:23.818 } 00:24:23.818 Got JSON-RPC error response 00:24:23.818 response: 00:24:23.818 { 00:24:23.818 "code": -5, 00:24:23.818 "message": "Input/output error" 00:24:23.818 } 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2844958 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2844958 ']' 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2844958 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2844958 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2844958' 00:24:23.818 killing process with pid 2844958 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2844958 00:24:23.818 Received shutdown signal, test time was about 10.000000 seconds 00:24:23.818 00:24:23.818 Latency(us) 00:24:23.818 [2024-12-06T13:18:12.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.818 [2024-12-06T13:18:12.458Z] =================================================================================================================== 00:24:23.818 [2024-12-06T13:18:12.458Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2844958 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:23.818 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:23.819 
14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2845272 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2845272 /var/tmp/bdevperf.sock 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2845272 ']' 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.819 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.079 [2024-12-06 14:18:12.455530] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
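This case passes an empty string as the key path. As the trace that follows shows, keyring_file_add_key only accepts absolute paths, so no key named key0 is ever created and the subsequent attach fails with "Required key not available" (-126) instead of an I/O error. A sketch of the failing sequence, with values taken from the trace:

    # Rejected: "Non-absolute paths are not allowed", RPC returns -1
    # (Operation not permitted), so key0 never exists in the keyring
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''

    # With no key0 available, the TLS attach cannot load a PSK and returns
    # -126 (Required key not available)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0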
00:24:24.080 [2024-12-06 14:18:12.455586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845272 ] 00:24:24.080 [2024-12-06 14:18:12.537880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.080 [2024-12-06 14:18:12.566317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.652 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.652 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:24.652 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:24.913 [2024-12-06 14:18:13.389812] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:24.913 [2024-12-06 14:18:13.389834] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:24.913 request: 00:24:24.913 { 00:24:24.913 "name": "key0", 00:24:24.913 "path": "", 00:24:24.913 "method": "keyring_file_add_key", 00:24:24.913 "req_id": 1 00:24:24.913 } 00:24:24.913 Got JSON-RPC error response 00:24:24.913 response: 00:24:24.913 { 00:24:24.913 "code": -1, 00:24:24.913 "message": "Operation not permitted" 00:24:24.913 } 00:24:24.913 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.175 [2024-12-06 14:18:13.558308] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.175 [2024-12-06 14:18:13.558332] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:25.175 request: 00:24:25.175 { 00:24:25.175 "name": "TLSTEST", 00:24:25.175 "trtype": "tcp", 00:24:25.175 "traddr": "10.0.0.2", 00:24:25.175 "adrfam": "ipv4", 00:24:25.175 "trsvcid": "4420", 00:24:25.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.175 "prchk_reftag": false, 00:24:25.175 "prchk_guard": false, 00:24:25.175 "hdgst": false, 00:24:25.175 "ddgst": false, 00:24:25.175 "psk": "key0", 00:24:25.175 "allow_unrecognized_csi": false, 00:24:25.175 "method": "bdev_nvme_attach_controller", 00:24:25.175 "req_id": 1 00:24:25.175 } 00:24:25.175 Got JSON-RPC error response 00:24:25.175 response: 00:24:25.175 { 00:24:25.175 "code": -126, 00:24:25.175 "message": "Required key not available" 00:24:25.175 } 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2845272 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2845272 ']' 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2845272 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2845272 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2845272' 00:24:25.175 killing process with pid 2845272 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2845272 00:24:25.175 Received shutdown signal, test time was about 10.000000 seconds 00:24:25.175 00:24:25.175 Latency(us) 00:24:25.175 [2024-12-06T13:18:13.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.175 [2024-12-06T13:18:13.815Z] =================================================================================================================== 00:24:25.175 [2024-12-06T13:18:13.815Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2845272 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.175 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.176 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.176 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2838893 00:24:25.176 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2838893 ']' 00:24:25.176 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2838893 00:24:25.176 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.176 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.176 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2838893 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2838893' 00:24:25.437 killing process with pid 2838893 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2838893 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2838893 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:25.437 14:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.y62da5yQbO 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.y62da5yQbO 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2845631 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2845631 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2845631 ']' 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.437 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.437 [2024-12-06 14:18:14.040726] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:25.437 [2024-12-06 14:18:14.040783] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.697 [2024-12-06 14:18:14.134431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.697 [2024-12-06 14:18:14.164689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.697 [2024-12-06 14:18:14.164723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
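The key for the positive path is built just above by format_interchange_psk, a helper from the test suite's nvmf/common.sh: the 48-hex-digit secret and the digest argument 2 become an interchange-format string with the NVMeTLSkey-1 prefix, an '02' selector and the base64-encoded material, and the result is written to a temp file with owner-only permissions. A minimal sketch of the file handling, reusing the exact names and values from the trace:

    # Interchange-format key produced by format_interchange_psk above
    key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'

    key_long_path=$(mktemp)                 # /tmp/tmp.y62da5yQbO in this run
    echo -n "$key_long" > "$key_long_path"
    chmod 0600 "$key_long_path"             # owner-only; looser modes are rejected later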
00:24:25.697 [2024-12-06 14:18:14.164729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.697 [2024-12-06 14:18:14.164734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.697 [2024-12-06 14:18:14.164738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:25.697 [2024-12-06 14:18:14.165196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.271 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.271 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:26.271 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.271 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.271 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.271 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.271 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.y62da5yQbO 00:24:26.271 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.y62da5yQbO 00:24:26.271 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:26.532 [2024-12-06 14:18:15.034731] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.533 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:26.792 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:26.792 [2024-12-06 14:18:15.395603] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:26.792 [2024-12-06 14:18:15.395789] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.792 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:27.052 malloc0 00:24:27.052 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:27.312 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.y62da5yQbO 00:24:27.572 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:27.572 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y62da5yQbO 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.y62da5yQbO 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2846010 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2846010 /var/tmp/bdevperf.sock 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2846010 ']' 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.573 [2024-12-06 14:18:16.176440] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
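The target side of the successful run is configured entirely through rpc.py against the nvmf_tgt's default /var/tmp/spdk.sock socket, as traced above: a TCP transport, a subsystem backed by a malloc namespace, a listener created with -k (which is what triggers the "TLS support is considered experimental" notice), and the key registered and bound to host1. Condensed from those commands, with the full rpc.py path shortened:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0      # malloc bdev used as the namespace
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.y62da5yQbO      # succeeds: file is mode 0600
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0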
00:24:27.573 [2024-12-06 14:18:16.176503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846010 ] 00:24:27.833 [2024-12-06 14:18:16.263184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.833 [2024-12-06 14:18:16.291907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.405 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.405 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:28.405 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y62da5yQbO 00:24:28.666 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:28.927 [2024-12-06 14:18:17.307990] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:28.927 TLSTESTn1 00:24:28.927 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:28.927 Running I/O for 10 seconds... 00:24:31.251 4351.00 IOPS, 17.00 MiB/s [2024-12-06T13:18:20.833Z] 5101.00 IOPS, 19.93 MiB/s [2024-12-06T13:18:21.774Z] 5404.33 IOPS, 21.11 MiB/s [2024-12-06T13:18:22.715Z] 5294.25 IOPS, 20.68 MiB/s [2024-12-06T13:18:23.654Z] 5201.00 IOPS, 20.32 MiB/s [2024-12-06T13:18:24.595Z] 5179.33 IOPS, 20.23 MiB/s [2024-12-06T13:18:25.530Z] 5296.00 IOPS, 20.69 MiB/s [2024-12-06T13:18:26.906Z] 5271.62 IOPS, 20.59 MiB/s [2024-12-06T13:18:27.842Z] 5201.22 IOPS, 20.32 MiB/s [2024-12-06T13:18:27.842Z] 5205.00 IOPS, 20.33 MiB/s 00:24:39.202 Latency(us) 00:24:39.202 [2024-12-06T13:18:27.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.202 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:39.202 Verification LBA range: start 0x0 length 0x2000 00:24:39.202 TLSTESTn1 : 10.02 5206.48 20.34 0.00 0.00 24545.53 4560.21 23374.51 00:24:39.202 [2024-12-06T13:18:27.842Z] =================================================================================================================== 00:24:39.202 [2024-12-06T13:18:27.842Z] Total : 5206.48 20.34 0.00 0.00 24545.53 4560.21 23374.51 00:24:39.202 { 00:24:39.202 "results": [ 00:24:39.202 { 00:24:39.202 "job": "TLSTESTn1", 00:24:39.202 "core_mask": "0x4", 00:24:39.202 "workload": "verify", 00:24:39.202 "status": "finished", 00:24:39.202 "verify_range": { 00:24:39.202 "start": 0, 00:24:39.202 "length": 8192 00:24:39.202 }, 00:24:39.202 "queue_depth": 128, 00:24:39.202 "io_size": 4096, 00:24:39.202 "runtime": 10.021546, 00:24:39.202 "iops": 5206.482113637956, 00:24:39.202 "mibps": 20.337820756398266, 00:24:39.202 "io_failed": 0, 00:24:39.202 "io_timeout": 0, 00:24:39.202 "avg_latency_us": 24545.528131807758, 00:24:39.202 "min_latency_us": 4560.213333333333, 00:24:39.202 "max_latency_us": 23374.506666666668 00:24:39.202 } 00:24:39.202 ], 00:24:39.202 
"core_count": 1 00:24:39.202 } 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2846010 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2846010 ']' 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2846010 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2846010 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2846010' 00:24:39.202 killing process with pid 2846010 00:24:39.202 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2846010 00:24:39.203 Received shutdown signal, test time was about 10.000000 seconds 00:24:39.203 00:24:39.203 Latency(us) 00:24:39.203 [2024-12-06T13:18:27.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.203 [2024-12-06T13:18:27.843Z] =================================================================================================================== 00:24:39.203 [2024-12-06T13:18:27.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2846010 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.y62da5yQbO 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y62da5yQbO 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y62da5yQbO 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.y62da5yQbO 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.y62da5yQbO 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2848332 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2848332 /var/tmp/bdevperf.sock 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2848332 ']' 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.203 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.203 [2024-12-06 14:18:27.781633] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
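Here the same key file that just carried a successful TLSTESTn1 run is deliberately opened up with chmod 0666 before being registered again. As the next trace shows, the keyring checks the file mode and refuses anything readable by group or others, so registration fails with "Operation not permitted" and the attach then fails with "Required key not available". A sketch of the sequence, using the same paths as the trace:

    chmod 0666 /tmp/tmp.y62da5yQbO      # make the key world-readable on purpose

    # Rejected: "Invalid permissions for key file '/tmp/tmp.y62da5yQbO': 0100666"
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y62da5yQbO

    # key0 was never added, so the TLS attach fails with -126
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0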
00:24:39.203 [2024-12-06 14:18:27.781690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848332 ] 00:24:39.462 [2024-12-06 14:18:27.864060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.462 [2024-12-06 14:18:27.891806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.032 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.032 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:40.032 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y62da5yQbO 00:24:40.293 [2024-12-06 14:18:28.735378] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.y62da5yQbO': 0100666 00:24:40.293 [2024-12-06 14:18:28.735401] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:40.293 request: 00:24:40.293 { 00:24:40.293 "name": "key0", 00:24:40.293 "path": "/tmp/tmp.y62da5yQbO", 00:24:40.293 "method": "keyring_file_add_key", 00:24:40.293 "req_id": 1 00:24:40.293 } 00:24:40.293 Got JSON-RPC error response 00:24:40.293 response: 00:24:40.293 { 00:24:40.293 "code": -1, 00:24:40.293 "message": "Operation not permitted" 00:24:40.293 } 00:24:40.293 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:40.293 [2024-12-06 14:18:28.911890] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:40.293 [2024-12-06 14:18:28.911918] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:40.293 request: 00:24:40.293 { 00:24:40.293 "name": "TLSTEST", 00:24:40.293 "trtype": "tcp", 00:24:40.293 "traddr": "10.0.0.2", 00:24:40.293 "adrfam": "ipv4", 00:24:40.293 "trsvcid": "4420", 00:24:40.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.293 "prchk_reftag": false, 00:24:40.293 "prchk_guard": false, 00:24:40.293 "hdgst": false, 00:24:40.293 "ddgst": false, 00:24:40.293 "psk": "key0", 00:24:40.293 "allow_unrecognized_csi": false, 00:24:40.293 "method": "bdev_nvme_attach_controller", 00:24:40.293 "req_id": 1 00:24:40.293 } 00:24:40.293 Got JSON-RPC error response 00:24:40.293 response: 00:24:40.293 { 00:24:40.293 "code": -126, 00:24:40.293 "message": "Required key not available" 00:24:40.293 } 00:24:40.554 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2848332 00:24:40.554 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2848332 ']' 00:24:40.554 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2848332 00:24:40.554 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:40.554 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.554 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848332 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848332' 00:24:40.554 killing process with pid 2848332 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2848332 00:24:40.554 Received shutdown signal, test time was about 10.000000 seconds 00:24:40.554 00:24:40.554 Latency(us) 00:24:40.554 [2024-12-06T13:18:29.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.554 [2024-12-06T13:18:29.194Z] =================================================================================================================== 00:24:40.554 [2024-12-06T13:18:29.194Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2848332 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2845631 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2845631 ']' 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2845631 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2845631 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2845631' 00:24:40.554 killing process with pid 2845631 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2845631 00:24:40.554 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2845631 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2848679 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2848679 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2848679 ']' 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.813 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.813 [2024-12-06 14:18:29.349497] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:40.813 [2024-12-06 14:18:29.349576] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.813 [2024-12-06 14:18:29.442431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.071 [2024-12-06 14:18:29.471501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.071 [2024-12-06 14:18:29.471534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.071 [2024-12-06 14:18:29.471540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.071 [2024-12-06 14:18:29.471544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.071 [2024-12-06 14:18:29.471549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
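The same permission check is then exercised on the target side: setup_nvmf_tgt is expected to fail because the key file is still mode 0666. The keyring again refuses to load it, so when nvmf_subsystem_add_host tries to bind host1 to key0 the key does not exist and the RPC returns -32603 (Internal error). Condensed from the trace that follows, rpc.py path shortened:

    # Fails: the key file is still 0666, so key0 is never added to the target's keyring
    rpc.py keyring_file_add_key key0 /tmp/tmp.y62da5yQbO

    # Fails in turn: "Key 'key0' does not exist", unable to add the host to the TCP transport
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0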
00:24:41.071 [2024-12-06 14:18:29.472011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.y62da5yQbO 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.y62da5yQbO 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.y62da5yQbO 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.y62da5yQbO 00:24:41.640 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:41.900 [2024-12-06 14:18:30.340609] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.900 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:42.161 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:42.161 [2024-12-06 14:18:30.709507] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:42.161 [2024-12-06 14:18:30.709703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.161 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:42.421 malloc0 00:24:42.421 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:42.680 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.y62da5yQbO 00:24:42.680 [2024-12-06 
14:18:31.244466] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.y62da5yQbO': 0100666 00:24:42.680 [2024-12-06 14:18:31.244485] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:42.680 request: 00:24:42.680 { 00:24:42.680 "name": "key0", 00:24:42.680 "path": "/tmp/tmp.y62da5yQbO", 00:24:42.680 "method": "keyring_file_add_key", 00:24:42.680 "req_id": 1 00:24:42.680 } 00:24:42.680 Got JSON-RPC error response 00:24:42.680 response: 00:24:42.680 { 00:24:42.680 "code": -1, 00:24:42.680 "message": "Operation not permitted" 00:24:42.680 } 00:24:42.680 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:42.939 [2024-12-06 14:18:31.412925] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:42.939 [2024-12-06 14:18:31.412951] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:42.939 request: 00:24:42.939 { 00:24:42.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.939 "host": "nqn.2016-06.io.spdk:host1", 00:24:42.939 "psk": "key0", 00:24:42.939 "method": "nvmf_subsystem_add_host", 00:24:42.939 "req_id": 1 00:24:42.939 } 00:24:42.939 Got JSON-RPC error response 00:24:42.939 response: 00:24:42.939 { 00:24:42.939 "code": -32603, 00:24:42.939 "message": "Internal error" 00:24:42.939 } 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2848679 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2848679 ']' 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2848679 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848679 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848679' 00:24:42.939 killing process with pid 2848679 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2848679 00:24:42.939 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2848679 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.y62da5yQbO 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:43.199 14:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2849060 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2849060 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2849060 ']' 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.199 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.200 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.200 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.200 [2024-12-06 14:18:31.653255] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:43.200 [2024-12-06 14:18:31.653310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.200 [2024-12-06 14:18:31.744241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.200 [2024-12-06 14:18:31.774099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.200 [2024-12-06 14:18:31.774125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.200 [2024-12-06 14:18:31.774131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.200 [2024-12-06 14:18:31.774135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.200 [2024-12-06 14:18:31.774139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
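The keyring_file_add_key failure above is the expected outcome: target/tls.sh@178 wraps the call in NOT, the key file was left world-readable (0100666), and the keyring rejects key files that group or others can read, so the script tightens it with chmod 0600 before retrying. A minimal sketch of the successful setup sequence the script then drives through rpc.py, using only commands and arguments visible in this log (the workspace prefix is shortened to scripts/rpc.py; the PSK contents themselves are not shown anywhere here):

  chmod 0600 /tmp/tmp.y62da5yQbO                 # keyring only accepts owner-only key files
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.y62da5yQbO
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Until key0 actually lands in the keyring, nvmf_subsystem_add_host fails with the -32603 "Internal error" seen above, which is why the two JSON-RPC errors appear as a pair.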
00:24:43.200 [2024-12-06 14:18:31.774599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.141 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.141 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:44.141 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:44.141 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:44.141 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.141 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.141 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.y62da5yQbO 00:24:44.141 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.y62da5yQbO 00:24:44.141 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:44.141 [2024-12-06 14:18:32.631252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.141 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:44.402 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:44.402 [2024-12-06 14:18:32.992126] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:44.402 [2024-12-06 14:18:32.992312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.402 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:44.663 malloc0 00:24:44.663 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:44.922 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.y62da5yQbO 00:24:44.922 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:45.181 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2849537 00:24:45.181 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:45.181 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:45.181 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2849537 /var/tmp/bdevperf.sock 00:24:45.181 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2849537 ']' 00:24:45.181 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.181 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.181 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:45.181 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.181 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.181 [2024-12-06 14:18:33.785280] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:45.181 [2024-12-06 14:18:33.785347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2849537 ] 00:24:45.440 [2024-12-06 14:18:33.873121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.440 [2024-12-06 14:18:33.909000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.010 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.010 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:46.010 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y62da5yQbO 00:24:46.270 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:46.531 [2024-12-06 14:18:34.909805] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:46.531 TLSTESTn1 00:24:46.531 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:46.792 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:46.792 "subsystems": [ 00:24:46.792 { 00:24:46.792 "subsystem": "keyring", 00:24:46.792 "config": [ 00:24:46.792 { 00:24:46.792 "method": "keyring_file_add_key", 00:24:46.792 "params": { 00:24:46.792 "name": "key0", 00:24:46.792 "path": "/tmp/tmp.y62da5yQbO" 00:24:46.792 } 00:24:46.792 } 00:24:46.792 ] 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "subsystem": "iobuf", 00:24:46.792 "config": [ 00:24:46.792 { 00:24:46.792 "method": "iobuf_set_options", 00:24:46.792 "params": { 00:24:46.792 "small_pool_count": 8192, 00:24:46.792 "large_pool_count": 1024, 00:24:46.792 "small_bufsize": 8192, 00:24:46.792 "large_bufsize": 135168, 00:24:46.792 "enable_numa": false 00:24:46.792 } 00:24:46.792 } 00:24:46.792 ] 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "subsystem": "sock", 00:24:46.792 "config": [ 00:24:46.792 { 00:24:46.792 "method": "sock_set_default_impl", 00:24:46.792 "params": { 00:24:46.792 "impl_name": "posix" 
00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "sock_impl_set_options", 00:24:46.792 "params": { 00:24:46.792 "impl_name": "ssl", 00:24:46.792 "recv_buf_size": 4096, 00:24:46.792 "send_buf_size": 4096, 00:24:46.792 "enable_recv_pipe": true, 00:24:46.792 "enable_quickack": false, 00:24:46.792 "enable_placement_id": 0, 00:24:46.792 "enable_zerocopy_send_server": true, 00:24:46.792 "enable_zerocopy_send_client": false, 00:24:46.792 "zerocopy_threshold": 0, 00:24:46.792 "tls_version": 0, 00:24:46.792 "enable_ktls": false 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "sock_impl_set_options", 00:24:46.792 "params": { 00:24:46.792 "impl_name": "posix", 00:24:46.792 "recv_buf_size": 2097152, 00:24:46.792 "send_buf_size": 2097152, 00:24:46.792 "enable_recv_pipe": true, 00:24:46.792 "enable_quickack": false, 00:24:46.792 "enable_placement_id": 0, 00:24:46.792 "enable_zerocopy_send_server": true, 00:24:46.792 "enable_zerocopy_send_client": false, 00:24:46.792 "zerocopy_threshold": 0, 00:24:46.792 "tls_version": 0, 00:24:46.792 "enable_ktls": false 00:24:46.792 } 00:24:46.792 } 00:24:46.792 ] 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "subsystem": "vmd", 00:24:46.792 "config": [] 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "subsystem": "accel", 00:24:46.792 "config": [ 00:24:46.792 { 00:24:46.792 "method": "accel_set_options", 00:24:46.792 "params": { 00:24:46.792 "small_cache_size": 128, 00:24:46.792 "large_cache_size": 16, 00:24:46.792 "task_count": 2048, 00:24:46.792 "sequence_count": 2048, 00:24:46.792 "buf_count": 2048 00:24:46.792 } 00:24:46.792 } 00:24:46.792 ] 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "subsystem": "bdev", 00:24:46.792 "config": [ 00:24:46.792 { 00:24:46.792 "method": "bdev_set_options", 00:24:46.792 "params": { 00:24:46.792 "bdev_io_pool_size": 65535, 00:24:46.792 "bdev_io_cache_size": 256, 00:24:46.792 "bdev_auto_examine": true, 00:24:46.792 "iobuf_small_cache_size": 128, 00:24:46.792 "iobuf_large_cache_size": 16 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "bdev_raid_set_options", 00:24:46.792 "params": { 00:24:46.792 "process_window_size_kb": 1024, 00:24:46.792 "process_max_bandwidth_mb_sec": 0 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "bdev_iscsi_set_options", 00:24:46.792 "params": { 00:24:46.792 "timeout_sec": 30 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "bdev_nvme_set_options", 00:24:46.792 "params": { 00:24:46.792 "action_on_timeout": "none", 00:24:46.792 "timeout_us": 0, 00:24:46.792 "timeout_admin_us": 0, 00:24:46.792 "keep_alive_timeout_ms": 10000, 00:24:46.792 "arbitration_burst": 0, 00:24:46.792 "low_priority_weight": 0, 00:24:46.792 "medium_priority_weight": 0, 00:24:46.792 "high_priority_weight": 0, 00:24:46.792 "nvme_adminq_poll_period_us": 10000, 00:24:46.792 "nvme_ioq_poll_period_us": 0, 00:24:46.792 "io_queue_requests": 0, 00:24:46.792 "delay_cmd_submit": true, 00:24:46.792 "transport_retry_count": 4, 00:24:46.792 "bdev_retry_count": 3, 00:24:46.792 "transport_ack_timeout": 0, 00:24:46.792 "ctrlr_loss_timeout_sec": 0, 00:24:46.792 "reconnect_delay_sec": 0, 00:24:46.792 "fast_io_fail_timeout_sec": 0, 00:24:46.792 "disable_auto_failback": false, 00:24:46.792 "generate_uuids": false, 00:24:46.792 "transport_tos": 0, 00:24:46.792 "nvme_error_stat": false, 00:24:46.792 "rdma_srq_size": 0, 00:24:46.792 "io_path_stat": false, 00:24:46.792 "allow_accel_sequence": false, 00:24:46.792 "rdma_max_cq_size": 0, 00:24:46.792 
"rdma_cm_event_timeout_ms": 0, 00:24:46.792 "dhchap_digests": [ 00:24:46.792 "sha256", 00:24:46.792 "sha384", 00:24:46.792 "sha512" 00:24:46.792 ], 00:24:46.792 "dhchap_dhgroups": [ 00:24:46.792 "null", 00:24:46.792 "ffdhe2048", 00:24:46.792 "ffdhe3072", 00:24:46.792 "ffdhe4096", 00:24:46.792 "ffdhe6144", 00:24:46.792 "ffdhe8192" 00:24:46.792 ] 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "bdev_nvme_set_hotplug", 00:24:46.792 "params": { 00:24:46.792 "period_us": 100000, 00:24:46.792 "enable": false 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "bdev_malloc_create", 00:24:46.792 "params": { 00:24:46.792 "name": "malloc0", 00:24:46.792 "num_blocks": 8192, 00:24:46.792 "block_size": 4096, 00:24:46.792 "physical_block_size": 4096, 00:24:46.792 "uuid": "d5f01069-d64f-4274-ad07-4307384e2f52", 00:24:46.792 "optimal_io_boundary": 0, 00:24:46.792 "md_size": 0, 00:24:46.792 "dif_type": 0, 00:24:46.792 "dif_is_head_of_md": false, 00:24:46.792 "dif_pi_format": 0 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "bdev_wait_for_examine" 00:24:46.792 } 00:24:46.792 ] 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "subsystem": "nbd", 00:24:46.792 "config": [] 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "subsystem": "scheduler", 00:24:46.792 "config": [ 00:24:46.792 { 00:24:46.792 "method": "framework_set_scheduler", 00:24:46.792 "params": { 00:24:46.792 "name": "static" 00:24:46.792 } 00:24:46.792 } 00:24:46.792 ] 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "subsystem": "nvmf", 00:24:46.792 "config": [ 00:24:46.792 { 00:24:46.792 "method": "nvmf_set_config", 00:24:46.792 "params": { 00:24:46.792 "discovery_filter": "match_any", 00:24:46.792 "admin_cmd_passthru": { 00:24:46.792 "identify_ctrlr": false 00:24:46.792 }, 00:24:46.792 "dhchap_digests": [ 00:24:46.792 "sha256", 00:24:46.792 "sha384", 00:24:46.792 "sha512" 00:24:46.792 ], 00:24:46.792 "dhchap_dhgroups": [ 00:24:46.792 "null", 00:24:46.792 "ffdhe2048", 00:24:46.792 "ffdhe3072", 00:24:46.792 "ffdhe4096", 00:24:46.792 "ffdhe6144", 00:24:46.792 "ffdhe8192" 00:24:46.792 ] 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "nvmf_set_max_subsystems", 00:24:46.792 "params": { 00:24:46.792 "max_subsystems": 1024 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "nvmf_set_crdt", 00:24:46.792 "params": { 00:24:46.792 "crdt1": 0, 00:24:46.792 "crdt2": 0, 00:24:46.792 "crdt3": 0 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "nvmf_create_transport", 00:24:46.792 "params": { 00:24:46.792 "trtype": "TCP", 00:24:46.792 "max_queue_depth": 128, 00:24:46.792 "max_io_qpairs_per_ctrlr": 127, 00:24:46.792 "in_capsule_data_size": 4096, 00:24:46.792 "max_io_size": 131072, 00:24:46.792 "io_unit_size": 131072, 00:24:46.792 "max_aq_depth": 128, 00:24:46.792 "num_shared_buffers": 511, 00:24:46.792 "buf_cache_size": 4294967295, 00:24:46.792 "dif_insert_or_strip": false, 00:24:46.792 "zcopy": false, 00:24:46.792 "c2h_success": false, 00:24:46.792 "sock_priority": 0, 00:24:46.792 "abort_timeout_sec": 1, 00:24:46.792 "ack_timeout": 0, 00:24:46.792 "data_wr_pool_size": 0 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "nvmf_create_subsystem", 00:24:46.792 "params": { 00:24:46.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.792 "allow_any_host": false, 00:24:46.792 "serial_number": "SPDK00000000000001", 00:24:46.792 "model_number": "SPDK bdev Controller", 00:24:46.792 "max_namespaces": 10, 00:24:46.792 "min_cntlid": 1, 00:24:46.792 
"max_cntlid": 65519, 00:24:46.792 "ana_reporting": false 00:24:46.792 } 00:24:46.792 }, 00:24:46.792 { 00:24:46.792 "method": "nvmf_subsystem_add_host", 00:24:46.792 "params": { 00:24:46.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.792 "host": "nqn.2016-06.io.spdk:host1", 00:24:46.792 "psk": "key0" 00:24:46.792 } 00:24:46.793 }, 00:24:46.793 { 00:24:46.793 "method": "nvmf_subsystem_add_ns", 00:24:46.793 "params": { 00:24:46.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.793 "namespace": { 00:24:46.793 "nsid": 1, 00:24:46.793 "bdev_name": "malloc0", 00:24:46.793 "nguid": "D5F01069D64F4274AD074307384E2F52", 00:24:46.793 "uuid": "d5f01069-d64f-4274-ad07-4307384e2f52", 00:24:46.793 "no_auto_visible": false 00:24:46.793 } 00:24:46.793 } 00:24:46.793 }, 00:24:46.793 { 00:24:46.793 "method": "nvmf_subsystem_add_listener", 00:24:46.793 "params": { 00:24:46.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.793 "listen_address": { 00:24:46.793 "trtype": "TCP", 00:24:46.793 "adrfam": "IPv4", 00:24:46.793 "traddr": "10.0.0.2", 00:24:46.793 "trsvcid": "4420" 00:24:46.793 }, 00:24:46.793 "secure_channel": true 00:24:46.793 } 00:24:46.793 } 00:24:46.793 ] 00:24:46.793 } 00:24:46.793 ] 00:24:46.793 }' 00:24:46.793 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:47.052 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:47.052 "subsystems": [ 00:24:47.052 { 00:24:47.052 "subsystem": "keyring", 00:24:47.052 "config": [ 00:24:47.052 { 00:24:47.052 "method": "keyring_file_add_key", 00:24:47.052 "params": { 00:24:47.052 "name": "key0", 00:24:47.052 "path": "/tmp/tmp.y62da5yQbO" 00:24:47.052 } 00:24:47.052 } 00:24:47.052 ] 00:24:47.052 }, 00:24:47.052 { 00:24:47.052 "subsystem": "iobuf", 00:24:47.052 "config": [ 00:24:47.052 { 00:24:47.052 "method": "iobuf_set_options", 00:24:47.052 "params": { 00:24:47.052 "small_pool_count": 8192, 00:24:47.052 "large_pool_count": 1024, 00:24:47.052 "small_bufsize": 8192, 00:24:47.052 "large_bufsize": 135168, 00:24:47.052 "enable_numa": false 00:24:47.052 } 00:24:47.052 } 00:24:47.052 ] 00:24:47.052 }, 00:24:47.052 { 00:24:47.052 "subsystem": "sock", 00:24:47.052 "config": [ 00:24:47.052 { 00:24:47.052 "method": "sock_set_default_impl", 00:24:47.052 "params": { 00:24:47.052 "impl_name": "posix" 00:24:47.052 } 00:24:47.052 }, 00:24:47.052 { 00:24:47.052 "method": "sock_impl_set_options", 00:24:47.052 "params": { 00:24:47.052 "impl_name": "ssl", 00:24:47.052 "recv_buf_size": 4096, 00:24:47.052 "send_buf_size": 4096, 00:24:47.052 "enable_recv_pipe": true, 00:24:47.052 "enable_quickack": false, 00:24:47.052 "enable_placement_id": 0, 00:24:47.052 "enable_zerocopy_send_server": true, 00:24:47.052 "enable_zerocopy_send_client": false, 00:24:47.052 "zerocopy_threshold": 0, 00:24:47.052 "tls_version": 0, 00:24:47.052 "enable_ktls": false 00:24:47.052 } 00:24:47.052 }, 00:24:47.052 { 00:24:47.052 "method": "sock_impl_set_options", 00:24:47.052 "params": { 00:24:47.052 "impl_name": "posix", 00:24:47.052 "recv_buf_size": 2097152, 00:24:47.052 "send_buf_size": 2097152, 00:24:47.052 "enable_recv_pipe": true, 00:24:47.052 "enable_quickack": false, 00:24:47.052 "enable_placement_id": 0, 00:24:47.052 "enable_zerocopy_send_server": true, 00:24:47.052 "enable_zerocopy_send_client": false, 00:24:47.052 "zerocopy_threshold": 0, 00:24:47.052 "tls_version": 0, 00:24:47.052 "enable_ktls": false 00:24:47.052 } 00:24:47.052 
} 00:24:47.052 ] 00:24:47.052 }, 00:24:47.052 { 00:24:47.052 "subsystem": "vmd", 00:24:47.052 "config": [] 00:24:47.052 }, 00:24:47.052 { 00:24:47.052 "subsystem": "accel", 00:24:47.052 "config": [ 00:24:47.052 { 00:24:47.052 "method": "accel_set_options", 00:24:47.052 "params": { 00:24:47.052 "small_cache_size": 128, 00:24:47.052 "large_cache_size": 16, 00:24:47.052 "task_count": 2048, 00:24:47.052 "sequence_count": 2048, 00:24:47.052 "buf_count": 2048 00:24:47.052 } 00:24:47.052 } 00:24:47.052 ] 00:24:47.052 }, 00:24:47.052 { 00:24:47.052 "subsystem": "bdev", 00:24:47.052 "config": [ 00:24:47.052 { 00:24:47.052 "method": "bdev_set_options", 00:24:47.052 "params": { 00:24:47.053 "bdev_io_pool_size": 65535, 00:24:47.053 "bdev_io_cache_size": 256, 00:24:47.053 "bdev_auto_examine": true, 00:24:47.053 "iobuf_small_cache_size": 128, 00:24:47.053 "iobuf_large_cache_size": 16 00:24:47.053 } 00:24:47.053 }, 00:24:47.053 { 00:24:47.053 "method": "bdev_raid_set_options", 00:24:47.053 "params": { 00:24:47.053 "process_window_size_kb": 1024, 00:24:47.053 "process_max_bandwidth_mb_sec": 0 00:24:47.053 } 00:24:47.053 }, 00:24:47.053 { 00:24:47.053 "method": "bdev_iscsi_set_options", 00:24:47.053 "params": { 00:24:47.053 "timeout_sec": 30 00:24:47.053 } 00:24:47.053 }, 00:24:47.053 { 00:24:47.053 "method": "bdev_nvme_set_options", 00:24:47.053 "params": { 00:24:47.053 "action_on_timeout": "none", 00:24:47.053 "timeout_us": 0, 00:24:47.053 "timeout_admin_us": 0, 00:24:47.053 "keep_alive_timeout_ms": 10000, 00:24:47.053 "arbitration_burst": 0, 00:24:47.053 "low_priority_weight": 0, 00:24:47.053 "medium_priority_weight": 0, 00:24:47.053 "high_priority_weight": 0, 00:24:47.053 "nvme_adminq_poll_period_us": 10000, 00:24:47.053 "nvme_ioq_poll_period_us": 0, 00:24:47.053 "io_queue_requests": 512, 00:24:47.053 "delay_cmd_submit": true, 00:24:47.053 "transport_retry_count": 4, 00:24:47.053 "bdev_retry_count": 3, 00:24:47.053 "transport_ack_timeout": 0, 00:24:47.053 "ctrlr_loss_timeout_sec": 0, 00:24:47.053 "reconnect_delay_sec": 0, 00:24:47.053 "fast_io_fail_timeout_sec": 0, 00:24:47.053 "disable_auto_failback": false, 00:24:47.053 "generate_uuids": false, 00:24:47.053 "transport_tos": 0, 00:24:47.053 "nvme_error_stat": false, 00:24:47.053 "rdma_srq_size": 0, 00:24:47.053 "io_path_stat": false, 00:24:47.053 "allow_accel_sequence": false, 00:24:47.053 "rdma_max_cq_size": 0, 00:24:47.053 "rdma_cm_event_timeout_ms": 0, 00:24:47.053 "dhchap_digests": [ 00:24:47.053 "sha256", 00:24:47.053 "sha384", 00:24:47.053 "sha512" 00:24:47.053 ], 00:24:47.053 "dhchap_dhgroups": [ 00:24:47.053 "null", 00:24:47.053 "ffdhe2048", 00:24:47.053 "ffdhe3072", 00:24:47.053 "ffdhe4096", 00:24:47.053 "ffdhe6144", 00:24:47.053 "ffdhe8192" 00:24:47.053 ] 00:24:47.053 } 00:24:47.053 }, 00:24:47.053 { 00:24:47.053 "method": "bdev_nvme_attach_controller", 00:24:47.053 "params": { 00:24:47.053 "name": "TLSTEST", 00:24:47.053 "trtype": "TCP", 00:24:47.053 "adrfam": "IPv4", 00:24:47.053 "traddr": "10.0.0.2", 00:24:47.053 "trsvcid": "4420", 00:24:47.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.053 "prchk_reftag": false, 00:24:47.053 "prchk_guard": false, 00:24:47.053 "ctrlr_loss_timeout_sec": 0, 00:24:47.053 "reconnect_delay_sec": 0, 00:24:47.053 "fast_io_fail_timeout_sec": 0, 00:24:47.053 "psk": "key0", 00:24:47.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.053 "hdgst": false, 00:24:47.053 "ddgst": false, 00:24:47.053 "multipath": "multipath" 00:24:47.053 } 00:24:47.053 }, 00:24:47.053 { 00:24:47.053 "method": 
"bdev_nvme_set_hotplug", 00:24:47.053 "params": { 00:24:47.053 "period_us": 100000, 00:24:47.053 "enable": false 00:24:47.053 } 00:24:47.053 }, 00:24:47.053 { 00:24:47.053 "method": "bdev_wait_for_examine" 00:24:47.053 } 00:24:47.053 ] 00:24:47.053 }, 00:24:47.053 { 00:24:47.053 "subsystem": "nbd", 00:24:47.053 "config": [] 00:24:47.053 } 00:24:47.053 ] 00:24:47.053 }' 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2849537 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2849537 ']' 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2849537 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2849537 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2849537' 00:24:47.053 killing process with pid 2849537 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2849537 00:24:47.053 Received shutdown signal, test time was about 10.000000 seconds 00:24:47.053 00:24:47.053 Latency(us) 00:24:47.053 [2024-12-06T13:18:35.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.053 [2024-12-06T13:18:35.693Z] =================================================================================================================== 00:24:47.053 [2024-12-06T13:18:35.693Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:47.053 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2849537 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2849060 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2849060 ']' 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2849060 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2849060 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2849060' 00:24:47.313 killing process with pid 2849060 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2849060 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2849060 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.313 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:47.313 "subsystems": [ 00:24:47.313 { 00:24:47.313 "subsystem": "keyring", 00:24:47.313 "config": [ 00:24:47.313 { 00:24:47.313 "method": "keyring_file_add_key", 00:24:47.313 "params": { 00:24:47.313 "name": "key0", 00:24:47.313 "path": "/tmp/tmp.y62da5yQbO" 00:24:47.313 } 00:24:47.313 } 00:24:47.313 ] 00:24:47.313 }, 00:24:47.313 { 00:24:47.313 "subsystem": "iobuf", 00:24:47.313 "config": [ 00:24:47.313 { 00:24:47.313 "method": "iobuf_set_options", 00:24:47.313 "params": { 00:24:47.313 "small_pool_count": 8192, 00:24:47.313 "large_pool_count": 1024, 00:24:47.313 "small_bufsize": 8192, 00:24:47.313 "large_bufsize": 135168, 00:24:47.313 "enable_numa": false 00:24:47.313 } 00:24:47.313 } 00:24:47.313 ] 00:24:47.313 }, 00:24:47.313 { 00:24:47.313 "subsystem": "sock", 00:24:47.313 "config": [ 00:24:47.313 { 00:24:47.313 "method": "sock_set_default_impl", 00:24:47.313 "params": { 00:24:47.313 "impl_name": "posix" 00:24:47.313 } 00:24:47.313 }, 00:24:47.313 { 00:24:47.313 "method": "sock_impl_set_options", 00:24:47.313 "params": { 00:24:47.313 "impl_name": "ssl", 00:24:47.313 "recv_buf_size": 4096, 00:24:47.313 "send_buf_size": 4096, 00:24:47.313 "enable_recv_pipe": true, 00:24:47.313 "enable_quickack": false, 00:24:47.313 "enable_placement_id": 0, 00:24:47.313 "enable_zerocopy_send_server": true, 00:24:47.313 "enable_zerocopy_send_client": false, 00:24:47.313 "zerocopy_threshold": 0, 00:24:47.313 "tls_version": 0, 00:24:47.313 "enable_ktls": false 00:24:47.313 } 00:24:47.313 }, 00:24:47.313 { 00:24:47.313 "method": "sock_impl_set_options", 00:24:47.313 "params": { 00:24:47.313 "impl_name": "posix", 00:24:47.313 "recv_buf_size": 2097152, 00:24:47.313 "send_buf_size": 2097152, 00:24:47.313 "enable_recv_pipe": true, 00:24:47.313 "enable_quickack": false, 00:24:47.313 "enable_placement_id": 0, 00:24:47.313 "enable_zerocopy_send_server": true, 00:24:47.313 "enable_zerocopy_send_client": false, 00:24:47.313 "zerocopy_threshold": 0, 00:24:47.313 "tls_version": 0, 00:24:47.313 "enable_ktls": false 00:24:47.313 } 00:24:47.313 } 00:24:47.313 ] 00:24:47.313 }, 00:24:47.313 { 00:24:47.313 "subsystem": "vmd", 00:24:47.313 "config": [] 00:24:47.313 }, 00:24:47.313 { 00:24:47.313 "subsystem": "accel", 00:24:47.313 "config": [ 00:24:47.313 { 00:24:47.313 "method": "accel_set_options", 00:24:47.313 "params": { 00:24:47.313 "small_cache_size": 128, 00:24:47.313 "large_cache_size": 16, 00:24:47.313 "task_count": 2048, 00:24:47.313 "sequence_count": 2048, 00:24:47.313 "buf_count": 2048 00:24:47.313 } 00:24:47.313 } 00:24:47.313 ] 00:24:47.313 }, 00:24:47.313 { 00:24:47.313 "subsystem": "bdev", 00:24:47.313 "config": [ 00:24:47.313 { 00:24:47.313 "method": "bdev_set_options", 00:24:47.313 "params": { 00:24:47.313 "bdev_io_pool_size": 65535, 00:24:47.313 "bdev_io_cache_size": 256, 00:24:47.313 "bdev_auto_examine": true, 00:24:47.313 "iobuf_small_cache_size": 128, 00:24:47.313 "iobuf_large_cache_size": 16 00:24:47.313 } 00:24:47.313 }, 00:24:47.313 { 00:24:47.313 "method": "bdev_raid_set_options", 00:24:47.313 "params": { 00:24:47.313 
"process_window_size_kb": 1024, 00:24:47.313 "process_max_bandwidth_mb_sec": 0 00:24:47.313 } 00:24:47.313 }, 00:24:47.313 { 00:24:47.313 "method": "bdev_iscsi_set_options", 00:24:47.313 "params": { 00:24:47.313 "timeout_sec": 30 00:24:47.313 } 00:24:47.313 }, 00:24:47.313 { 00:24:47.314 "method": "bdev_nvme_set_options", 00:24:47.314 "params": { 00:24:47.314 "action_on_timeout": "none", 00:24:47.314 "timeout_us": 0, 00:24:47.314 "timeout_admin_us": 0, 00:24:47.314 "keep_alive_timeout_ms": 10000, 00:24:47.314 "arbitration_burst": 0, 00:24:47.314 "low_priority_weight": 0, 00:24:47.314 "medium_priority_weight": 0, 00:24:47.314 "high_priority_weight": 0, 00:24:47.314 "nvme_adminq_poll_period_us": 10000, 00:24:47.314 "nvme_ioq_poll_period_us": 0, 00:24:47.314 "io_queue_requests": 0, 00:24:47.314 "delay_cmd_submit": true, 00:24:47.314 "transport_retry_count": 4, 00:24:47.314 "bdev_retry_count": 3, 00:24:47.314 "transport_ack_timeout": 0, 00:24:47.314 "ctrlr_loss_timeout_sec": 0, 00:24:47.314 "reconnect_delay_sec": 0, 00:24:47.314 "fast_io_fail_timeout_sec": 0, 00:24:47.314 "disable_auto_failback": false, 00:24:47.314 "generate_uuids": false, 00:24:47.314 "transport_tos": 0, 00:24:47.314 "nvme_error_stat": false, 00:24:47.314 "rdma_srq_size": 0, 00:24:47.314 "io_path_stat": false, 00:24:47.314 "allow_accel_sequence": false, 00:24:47.314 "rdma_max_cq_size": 0, 00:24:47.314 "rdma_cm_event_timeout_ms": 0, 00:24:47.314 "dhchap_digests": [ 00:24:47.314 "sha256", 00:24:47.314 "sha384", 00:24:47.314 "sha512" 00:24:47.314 ], 00:24:47.314 "dhchap_dhgroups": [ 00:24:47.314 "null", 00:24:47.314 "ffdhe2048", 00:24:47.314 "ffdhe3072", 00:24:47.314 "ffdhe4096", 00:24:47.314 "ffdhe6144", 00:24:47.314 "ffdhe8192" 00:24:47.314 ] 00:24:47.314 } 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "method": "bdev_nvme_set_hotplug", 00:24:47.314 "params": { 00:24:47.314 "period_us": 100000, 00:24:47.314 "enable": false 00:24:47.314 } 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "method": "bdev_malloc_create", 00:24:47.314 "params": { 00:24:47.314 "name": "malloc0", 00:24:47.314 "num_blocks": 8192, 00:24:47.314 "block_size": 4096, 00:24:47.314 "physical_block_size": 4096, 00:24:47.314 "uuid": "d5f01069-d64f-4274-ad07-4307384e2f52", 00:24:47.314 "optimal_io_boundary": 0, 00:24:47.314 "md_size": 0, 00:24:47.314 "dif_type": 0, 00:24:47.314 "dif_is_head_of_md": false, 00:24:47.314 "dif_pi_format": 0 00:24:47.314 } 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "method": "bdev_wait_for_examine" 00:24:47.314 } 00:24:47.314 ] 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "subsystem": "nbd", 00:24:47.314 "config": [] 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "subsystem": "scheduler", 00:24:47.314 "config": [ 00:24:47.314 { 00:24:47.314 "method": "framework_set_scheduler", 00:24:47.314 "params": { 00:24:47.314 "name": "static" 00:24:47.314 } 00:24:47.314 } 00:24:47.314 ] 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "subsystem": "nvmf", 00:24:47.314 "config": [ 00:24:47.314 { 00:24:47.314 "method": "nvmf_set_config", 00:24:47.314 "params": { 00:24:47.314 "discovery_filter": "match_any", 00:24:47.314 "admin_cmd_passthru": { 00:24:47.314 "identify_ctrlr": false 00:24:47.314 }, 00:24:47.314 "dhchap_digests": [ 00:24:47.314 "sha256", 00:24:47.314 "sha384", 00:24:47.314 "sha512" 00:24:47.314 ], 00:24:47.314 "dhchap_dhgroups": [ 00:24:47.314 "null", 00:24:47.314 "ffdhe2048", 00:24:47.314 "ffdhe3072", 00:24:47.314 "ffdhe4096", 00:24:47.314 "ffdhe6144", 00:24:47.314 "ffdhe8192" 00:24:47.314 ] 00:24:47.314 } 00:24:47.314 }, 00:24:47.314 { 
00:24:47.314 "method": "nvmf_set_max_subsystems", 00:24:47.314 "params": { 00:24:47.314 "max_subsystems": 1024 00:24:47.314 } 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "method": "nvmf_set_crdt", 00:24:47.314 "params": { 00:24:47.314 "crdt1": 0, 00:24:47.314 "crdt2": 0, 00:24:47.314 "crdt3": 0 00:24:47.314 } 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "method": "nvmf_create_transport", 00:24:47.314 "params": { 00:24:47.314 "trtype": "TCP", 00:24:47.314 "max_queue_depth": 128, 00:24:47.314 "max_io_qpairs_per_ctrlr": 127, 00:24:47.314 "in_capsule_data_size": 4096, 00:24:47.314 "max_io_size": 131072, 00:24:47.314 "io_unit_size": 131072, 00:24:47.314 "max_aq_depth": 128, 00:24:47.314 "num_shared_buffers": 511, 00:24:47.314 "buf_cache_size": 4294967295, 00:24:47.314 "dif_insert_or_strip": false, 00:24:47.314 "zcopy": false, 00:24:47.314 "c2h_success": false, 00:24:47.314 "sock_priority": 0, 00:24:47.314 "abort_timeout_sec": 1, 00:24:47.314 "ack_timeout": 0, 00:24:47.314 "data_wr_pool_size": 0 00:24:47.314 } 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "method": "nvmf_create_subsystem", 00:24:47.314 "params": { 00:24:47.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.314 "allow_any_host": false, 00:24:47.314 "serial_number": "SPDK00000000000001", 00:24:47.314 "model_number": "SPDK bdev Controller", 00:24:47.314 "max_namespaces": 10, 00:24:47.314 "min_cntlid": 1, 00:24:47.314 "max_cntlid": 65519, 00:24:47.314 "ana_reporting": false 00:24:47.314 } 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "method": "nvmf_subsystem_add_host", 00:24:47.314 "params": { 00:24:47.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.314 "host": "nqn.2016-06.io.spdk:host1", 00:24:47.314 "psk": "key0" 00:24:47.314 } 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "method": "nvmf_subsystem_add_ns", 00:24:47.314 "params": { 00:24:47.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.314 "namespace": { 00:24:47.314 "nsid": 1, 00:24:47.314 "bdev_name": "malloc0", 00:24:47.314 "nguid": "D5F01069D64F4274AD074307384E2F52", 00:24:47.314 "uuid": "d5f01069-d64f-4274-ad07-4307384e2f52", 00:24:47.314 "no_auto_visible": false 00:24:47.314 } 00:24:47.314 } 00:24:47.314 }, 00:24:47.314 { 00:24:47.314 "method": "nvmf_subsystem_add_listener", 00:24:47.314 "params": { 00:24:47.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.314 "listen_address": { 00:24:47.314 "trtype": "TCP", 00:24:47.314 "adrfam": "IPv4", 00:24:47.314 "traddr": "10.0.0.2", 00:24:47.314 "trsvcid": "4420" 00:24:47.314 }, 00:24:47.314 "secure_channel": true 00:24:47.314 } 00:24:47.314 } 00:24:47.314 ] 00:24:47.314 } 00:24:47.314 ] 00:24:47.314 }' 00:24:47.314 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2850098 00:24:47.314 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2850098 00:24:47.314 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:47.314 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2850098 ']' 00:24:47.314 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.314 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.314 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:24:47.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.314 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.314 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.314 [2024-12-06 14:18:35.943435] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:47.314 [2024-12-06 14:18:35.943499] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.573 [2024-12-06 14:18:36.034208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.573 [2024-12-06 14:18:36.063988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.573 [2024-12-06 14:18:36.064015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.573 [2024-12-06 14:18:36.064021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.573 [2024-12-06 14:18:36.064026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.573 [2024-12-06 14:18:36.064031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.573 [2024-12-06 14:18:36.064513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.833 [2024-12-06 14:18:36.258094] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.833 [2024-12-06 14:18:36.290118] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.833 [2024-12-06 14:18:36.290320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.092 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.092 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:48.092 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.092 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.092 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2850130 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2850130 /var/tmp/bdevperf.sock 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2850130 ']' 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
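Both JSON blobs captured above come from save_config (target/tls.sh@198 against the default target socket, @199 against /var/tmp/bdevperf.sock), are stored in the tgtconf and bdevperfconf variables, and are then fed straight back in when the applications are relaunched with -c /dev/fd/62 and -c /dev/fd/63. Those /dev/fd paths suggest bash process substitution; a rough sketch of the pattern, under that assumption and with paths shortened:

  tgtconf=$(scripts/rpc.py save_config)                              # dump the live target configuration as JSON
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &   # <(...) appears as /dev/fd/NN, as in the log

The same trick is applied to bdevperf just below, so the restarted target and the I/O generator come up with exactly the configuration (keyring entry, TLS listener, malloc0 namespace) that was built up interactively in the first pass.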
00:24:48.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.353 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:48.353 "subsystems": [ 00:24:48.353 { 00:24:48.353 "subsystem": "keyring", 00:24:48.353 "config": [ 00:24:48.353 { 00:24:48.353 "method": "keyring_file_add_key", 00:24:48.353 "params": { 00:24:48.353 "name": "key0", 00:24:48.353 "path": "/tmp/tmp.y62da5yQbO" 00:24:48.353 } 00:24:48.353 } 00:24:48.353 ] 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "subsystem": "iobuf", 00:24:48.353 "config": [ 00:24:48.353 { 00:24:48.353 "method": "iobuf_set_options", 00:24:48.353 "params": { 00:24:48.353 "small_pool_count": 8192, 00:24:48.353 "large_pool_count": 1024, 00:24:48.353 "small_bufsize": 8192, 00:24:48.353 "large_bufsize": 135168, 00:24:48.353 "enable_numa": false 00:24:48.353 } 00:24:48.353 } 00:24:48.353 ] 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "subsystem": "sock", 00:24:48.353 "config": [ 00:24:48.353 { 00:24:48.353 "method": "sock_set_default_impl", 00:24:48.353 "params": { 00:24:48.353 "impl_name": "posix" 00:24:48.353 } 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "method": "sock_impl_set_options", 00:24:48.353 "params": { 00:24:48.353 "impl_name": "ssl", 00:24:48.353 "recv_buf_size": 4096, 00:24:48.353 "send_buf_size": 4096, 00:24:48.353 "enable_recv_pipe": true, 00:24:48.353 "enable_quickack": false, 00:24:48.353 "enable_placement_id": 0, 00:24:48.353 "enable_zerocopy_send_server": true, 00:24:48.353 "enable_zerocopy_send_client": false, 00:24:48.353 "zerocopy_threshold": 0, 00:24:48.353 "tls_version": 0, 00:24:48.353 "enable_ktls": false 00:24:48.353 } 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "method": "sock_impl_set_options", 00:24:48.353 "params": { 00:24:48.353 "impl_name": "posix", 00:24:48.353 "recv_buf_size": 2097152, 00:24:48.353 "send_buf_size": 2097152, 00:24:48.353 "enable_recv_pipe": true, 00:24:48.353 "enable_quickack": false, 00:24:48.353 "enable_placement_id": 0, 00:24:48.353 "enable_zerocopy_send_server": true, 00:24:48.353 "enable_zerocopy_send_client": false, 00:24:48.353 "zerocopy_threshold": 0, 00:24:48.353 "tls_version": 0, 00:24:48.353 "enable_ktls": false 00:24:48.353 } 00:24:48.353 } 00:24:48.353 ] 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "subsystem": "vmd", 00:24:48.353 "config": [] 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "subsystem": "accel", 00:24:48.353 "config": [ 00:24:48.353 { 00:24:48.353 "method": "accel_set_options", 00:24:48.353 "params": { 00:24:48.353 "small_cache_size": 128, 00:24:48.353 "large_cache_size": 16, 00:24:48.353 "task_count": 2048, 00:24:48.353 "sequence_count": 2048, 00:24:48.353 "buf_count": 2048 00:24:48.353 } 00:24:48.353 } 00:24:48.353 ] 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "subsystem": "bdev", 00:24:48.353 "config": [ 00:24:48.353 { 00:24:48.353 "method": "bdev_set_options", 00:24:48.353 "params": { 00:24:48.353 "bdev_io_pool_size": 65535, 00:24:48.353 "bdev_io_cache_size": 256, 00:24:48.353 "bdev_auto_examine": true, 00:24:48.353 "iobuf_small_cache_size": 128, 
00:24:48.353 "iobuf_large_cache_size": 16 00:24:48.353 } 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "method": "bdev_raid_set_options", 00:24:48.353 "params": { 00:24:48.353 "process_window_size_kb": 1024, 00:24:48.353 "process_max_bandwidth_mb_sec": 0 00:24:48.353 } 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "method": "bdev_iscsi_set_options", 00:24:48.353 "params": { 00:24:48.353 "timeout_sec": 30 00:24:48.353 } 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "method": "bdev_nvme_set_options", 00:24:48.353 "params": { 00:24:48.353 "action_on_timeout": "none", 00:24:48.353 "timeout_us": 0, 00:24:48.353 "timeout_admin_us": 0, 00:24:48.353 "keep_alive_timeout_ms": 10000, 00:24:48.353 "arbitration_burst": 0, 00:24:48.353 "low_priority_weight": 0, 00:24:48.353 "medium_priority_weight": 0, 00:24:48.353 "high_priority_weight": 0, 00:24:48.353 "nvme_adminq_poll_period_us": 10000, 00:24:48.353 "nvme_ioq_poll_period_us": 0, 00:24:48.353 "io_queue_requests": 512, 00:24:48.353 "delay_cmd_submit": true, 00:24:48.353 "transport_retry_count": 4, 00:24:48.353 "bdev_retry_count": 3, 00:24:48.353 "transport_ack_timeout": 0, 00:24:48.353 "ctrlr_loss_timeout_sec": 0, 00:24:48.353 "reconnect_delay_sec": 0, 00:24:48.353 "fast_io_fail_timeout_sec": 0, 00:24:48.353 "disable_auto_failback": false, 00:24:48.353 "generate_uuids": false, 00:24:48.353 "transport_tos": 0, 00:24:48.353 "nvme_error_stat": false, 00:24:48.353 "rdma_srq_size": 0, 00:24:48.353 "io_path_stat": false, 00:24:48.353 "allow_accel_sequence": false, 00:24:48.353 "rdma_max_cq_size": 0, 00:24:48.353 "rdma_cm_event_timeout_ms": 0, 00:24:48.353 "dhchap_digests": [ 00:24:48.353 "sha256", 00:24:48.353 "sha384", 00:24:48.353 "sha512" 00:24:48.353 ], 00:24:48.353 "dhchap_dhgroups": [ 00:24:48.353 "null", 00:24:48.353 "ffdhe2048", 00:24:48.353 "ffdhe3072", 00:24:48.353 "ffdhe4096", 00:24:48.353 "ffdhe6144", 00:24:48.353 "ffdhe8192" 00:24:48.353 ] 00:24:48.353 } 00:24:48.353 }, 00:24:48.353 { 00:24:48.353 "method": "bdev_nvme_attach_controller", 00:24:48.353 "params": { 00:24:48.354 "name": "TLSTEST", 00:24:48.354 "trtype": "TCP", 00:24:48.354 "adrfam": "IPv4", 00:24:48.354 "traddr": "10.0.0.2", 00:24:48.354 "trsvcid": "4420", 00:24:48.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.354 "prchk_reftag": false, 00:24:48.354 "prchk_guard": false, 00:24:48.354 "ctrlr_loss_timeout_sec": 0, 00:24:48.354 "reconnect_delay_sec": 0, 00:24:48.354 "fast_io_fail_timeout_sec": 0, 00:24:48.354 "psk": "key0", 00:24:48.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.354 "hdgst": false, 00:24:48.354 "ddgst": false, 00:24:48.354 "multipath": "multipath" 00:24:48.354 } 00:24:48.354 }, 00:24:48.354 { 00:24:48.354 "method": "bdev_nvme_set_hotplug", 00:24:48.354 "params": { 00:24:48.354 "period_us": 100000, 00:24:48.354 "enable": false 00:24:48.354 } 00:24:48.354 }, 00:24:48.354 { 00:24:48.354 "method": "bdev_wait_for_examine" 00:24:48.354 } 00:24:48.354 ] 00:24:48.354 }, 00:24:48.354 { 00:24:48.354 "subsystem": "nbd", 00:24:48.354 "config": [] 00:24:48.354 } 00:24:48.354 ] 00:24:48.354 }' 00:24:48.354 [2024-12-06 14:18:36.806143] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:24:48.354 [2024-12-06 14:18:36.806197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850130 ] 00:24:48.354 [2024-12-06 14:18:36.893374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.354 [2024-12-06 14:18:36.929141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.614 [2024-12-06 14:18:37.069904] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:49.183 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.184 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:49.184 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:49.184 Running I/O for 10 seconds... 00:24:51.062 3877.00 IOPS, 15.14 MiB/s [2024-12-06T13:18:41.080Z] 4315.50 IOPS, 16.86 MiB/s [2024-12-06T13:18:42.020Z] 4650.33 IOPS, 18.17 MiB/s [2024-12-06T13:18:42.957Z] 4654.50 IOPS, 18.18 MiB/s [2024-12-06T13:18:43.897Z] 4778.80 IOPS, 18.67 MiB/s [2024-12-06T13:18:44.833Z] 5008.83 IOPS, 19.57 MiB/s [2024-12-06T13:18:45.865Z] 5097.14 IOPS, 19.91 MiB/s [2024-12-06T13:18:46.805Z] 5074.00 IOPS, 19.82 MiB/s [2024-12-06T13:18:47.743Z] 4974.33 IOPS, 19.43 MiB/s [2024-12-06T13:18:47.743Z] 5071.90 IOPS, 19.81 MiB/s 00:24:59.103 Latency(us) 00:24:59.103 [2024-12-06T13:18:47.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.103 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:59.103 Verification LBA range: start 0x0 length 0x2000 00:24:59.103 TLSTESTn1 : 10.01 5078.61 19.84 0.00 0.00 25168.95 4560.21 26542.08 00:24:59.103 [2024-12-06T13:18:47.743Z] =================================================================================================================== 00:24:59.103 [2024-12-06T13:18:47.743Z] Total : 5078.61 19.84 0.00 0.00 25168.95 4560.21 26542.08 00:24:59.103 { 00:24:59.103 "results": [ 00:24:59.103 { 00:24:59.103 "job": "TLSTESTn1", 00:24:59.103 "core_mask": "0x4", 00:24:59.103 "workload": "verify", 00:24:59.103 "status": "finished", 00:24:59.103 "verify_range": { 00:24:59.103 "start": 0, 00:24:59.103 "length": 8192 00:24:59.103 }, 00:24:59.103 "queue_depth": 128, 00:24:59.103 "io_size": 4096, 00:24:59.103 "runtime": 10.011802, 00:24:59.103 "iops": 5078.606228928618, 00:24:59.103 "mibps": 19.838305581752415, 00:24:59.103 "io_failed": 0, 00:24:59.103 "io_timeout": 0, 00:24:59.103 "avg_latency_us": 25168.947677300082, 00:24:59.103 "min_latency_us": 4560.213333333333, 00:24:59.103 "max_latency_us": 26542.08 00:24:59.103 } 00:24:59.103 ], 00:24:59.103 "core_count": 1 00:24:59.103 } 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2850130 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2850130 ']' 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2850130 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2850130 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2850130' 00:24:59.365 killing process with pid 2850130 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2850130 00:24:59.365 Received shutdown signal, test time was about 10.000000 seconds 00:24:59.365 00:24:59.365 Latency(us) 00:24:59.365 [2024-12-06T13:18:48.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.365 [2024-12-06T13:18:48.005Z] =================================================================================================================== 00:24:59.365 [2024-12-06T13:18:48.005Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2850130 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2850098 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2850098 ']' 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2850098 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2850098 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2850098' 00:24:59.365 killing process with pid 2850098 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2850098 00:24:59.365 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2850098 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2852473 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2852473 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
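The bdevperf instances in this test are launched with -z, so they only initialize the framework and then wait on /var/tmp/bdevperf.sock: the first one (pid 2849537) is handed the key and the TLS controller through explicit rpc.py calls, the second (pid 2850130) loads the same settings from its -c /dev/fd/63 config, and the 10-second verify run above is triggered by bdevperf.py perform_tests. A condensed replay that combines the RPC-driven controller setup used for the first instance with the perform_tests call used for the second, using only commands visible in this log (jenkins paths shortened):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y62da5yQbO
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0    # creates bdev TLSTESTn1
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests     # starts the queued verify job

The JSON block above (roughly 5078 IOPS over the ~10 s runtime) is bdevperf's own results dump; once it is printed the script kills the bdevperf and nvmf_tgt pids it recorded earlier and moves on to the next target configuration.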
00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2852473 ']' 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.627 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:59.627 [2024-12-06 14:18:48.165541] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:24:59.627 [2024-12-06 14:18:48.165596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.627 [2024-12-06 14:18:48.258586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.887 [2024-12-06 14:18:48.299609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.887 [2024-12-06 14:18:48.299656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.887 [2024-12-06 14:18:48.299665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.887 [2024-12-06 14:18:48.299678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.887 [2024-12-06 14:18:48.299684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
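The startup notices above describe how a tracepoint snapshot could be taken from this target while it runs; a short sketch based only on those hints (the -i 0 matches the instance id the target was started with, and the spdk_trace binary location may differ between builds):

    # capture a snapshot of nvmf tracepoints from the running target (instance id 0)
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or keep the raw shared-memory trace file for offline analysis, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0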
00:24:59.887 [2024-12-06 14:18:48.300348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.459 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.459 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:00.459 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.459 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.459 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.459 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.459 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.y62da5yQbO 00:25:00.459 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.y62da5yQbO 00:25:00.459 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:00.721 [2024-12-06 14:18:49.187495] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.721 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:00.982 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:00.982 [2024-12-06 14:18:49.540374] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:00.982 [2024-12-06 14:18:49.540724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.982 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:01.243 malloc0 00:25:01.243 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:01.504 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.y62da5yQbO 00:25:01.504 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:01.764 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2852840 00:25:01.764 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:01.764 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:01.764 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2852840 /var/tmp/bdevperf.sock 00:25:01.764 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2852840 ']' 00:25:01.764 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.764 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.764 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:01.765 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.765 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.765 [2024-12-06 14:18:50.359802] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:25:01.765 [2024-12-06 14:18:50.359875] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852840 ] 00:25:02.052 [2024-12-06 14:18:50.447520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.052 [2024-12-06 14:18:50.483925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.624 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.624 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:02.624 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y62da5yQbO 00:25:02.884 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:02.884 [2024-12-06 14:18:51.459628] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.144 nvme0n1 00:25:03.144 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:03.144 Running I/O for 1 seconds... 
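The sequence traced through target/tls.sh lines 221-234 above is the core of the TLS path: the target gets a TCP transport, a subsystem backed by a malloc bdev, a TLS-enabled listener (-k), and a PSK bound to the allowed host, while the bdevperf side loads the same key file and attaches with --psk before perform_tests drives I/O. Condensed into plain rpc.py calls, with the long workspace paths shortened; the key file and NQNs are the ones appearing in the log:

    KEY=/tmp/tmp.y62da5yQbO                               # PSK interchange file used throughout this test
    RPC="./scripts/rpc.py"                                # target RPC (default socket /var/tmp/spdk.sock)
    BPERF_RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # target side: transport, subsystem, TLS listener, backing bdev, namespace, PSK for host1
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 "$KEY"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # initiator side (bdevperf, started with -z -r /var/tmp/bdevperf.sock): same key, attach over TLS
    $BPERF_RPC keyring_file_add_key key0 "$KEY"
    $BPERF_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

    # run the verify workload through the TLS connection for the configured -t duration
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests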
00:25:04.085 4874.00 IOPS, 19.04 MiB/s 00:25:04.085 Latency(us) 00:25:04.085 [2024-12-06T13:18:52.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.085 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:04.085 Verification LBA range: start 0x0 length 0x2000 00:25:04.085 nvme0n1 : 1.02 4920.86 19.22 0.00 0.00 25830.71 5843.63 67283.63 00:25:04.085 [2024-12-06T13:18:52.725Z] =================================================================================================================== 00:25:04.085 [2024-12-06T13:18:52.725Z] Total : 4920.86 19.22 0.00 0.00 25830.71 5843.63 67283.63 00:25:04.085 { 00:25:04.085 "results": [ 00:25:04.085 { 00:25:04.085 "job": "nvme0n1", 00:25:04.085 "core_mask": "0x2", 00:25:04.085 "workload": "verify", 00:25:04.085 "status": "finished", 00:25:04.085 "verify_range": { 00:25:04.085 "start": 0, 00:25:04.085 "length": 8192 00:25:04.085 }, 00:25:04.085 "queue_depth": 128, 00:25:04.085 "io_size": 4096, 00:25:04.085 "runtime": 1.016489, 00:25:04.085 "iops": 4920.859940442051, 00:25:04.085 "mibps": 19.222109142351762, 00:25:04.085 "io_failed": 0, 00:25:04.085 "io_timeout": 0, 00:25:04.085 "avg_latency_us": 25830.70639477542, 00:25:04.085 "min_latency_us": 5843.626666666667, 00:25:04.085 "max_latency_us": 67283.62666666666 00:25:04.085 } 00:25:04.085 ], 00:25:04.085 "core_count": 1 00:25:04.085 } 00:25:04.085 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2852840 00:25:04.085 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2852840 ']' 00:25:04.085 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2852840 00:25:04.085 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:04.085 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.085 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2852840 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2852840' 00:25:04.345 killing process with pid 2852840 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2852840 00:25:04.345 Received shutdown signal, test time was about 1.000000 seconds 00:25:04.345 00:25:04.345 Latency(us) 00:25:04.345 [2024-12-06T13:18:52.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.345 [2024-12-06T13:18:52.985Z] =================================================================================================================== 00:25:04.345 [2024-12-06T13:18:52.985Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2852840 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2852473 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2852473 ']' 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2852473 00:25:04.345 14:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2852473 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.345 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.346 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2852473' 00:25:04.346 killing process with pid 2852473 00:25:04.346 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2852473 00:25:04.346 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2852473 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2853403 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2853403 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2853403 ']' 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.608 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.608 [2024-12-06 14:18:53.117222] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:25:04.608 [2024-12-06 14:18:53.117280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.608 [2024-12-06 14:18:53.210697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.868 [2024-12-06 14:18:53.258172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.868 [2024-12-06 14:18:53.258227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:04.868 [2024-12-06 14:18:53.258242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.868 [2024-12-06 14:18:53.258249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.868 [2024-12-06 14:18:53.258255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.868 [2024-12-06 14:18:53.259001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:05.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:05.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:05.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:05.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.439 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.439 [2024-12-06 14:18:53.974367] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.439 malloc0 00:25:05.439 [2024-12-06 14:18:54.004349] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:05.439 [2024-12-06 14:18:54.004686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2853546 00:25:05.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2853546 /var/tmp/bdevperf.sock 00:25:05.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:05.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2853546 ']' 00:25:05.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:05.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.439 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.699 [2024-12-06 14:18:54.087123] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:25:05.699 [2024-12-06 14:18:54.087187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853546 ] 00:25:05.699 [2024-12-06 14:18:54.175449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.699 [2024-12-06 14:18:54.209780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.269 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.269 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:06.269 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.y62da5yQbO 00:25:06.530 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:06.791 [2024-12-06 14:18:55.204759] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.791 nvme0n1 00:25:06.791 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.791 Running I/O for 1 seconds... 00:25:08.170 5317.00 IOPS, 20.77 MiB/s 00:25:08.170 Latency(us) 00:25:08.170 [2024-12-06T13:18:56.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.170 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:08.170 Verification LBA range: start 0x0 length 0x2000 00:25:08.170 nvme0n1 : 1.05 5189.54 20.27 0.00 0.00 24137.51 5816.32 54831.79 00:25:08.170 [2024-12-06T13:18:56.810Z] =================================================================================================================== 00:25:08.170 [2024-12-06T13:18:56.810Z] Total : 5189.54 20.27 0.00 0.00 24137.51 5816.32 54831.79 00:25:08.170 { 00:25:08.170 "results": [ 00:25:08.170 { 00:25:08.170 "job": "nvme0n1", 00:25:08.170 "core_mask": "0x2", 00:25:08.170 "workload": "verify", 00:25:08.170 "status": "finished", 00:25:08.170 "verify_range": { 00:25:08.170 "start": 0, 00:25:08.170 "length": 8192 00:25:08.170 }, 00:25:08.170 "queue_depth": 128, 00:25:08.170 "io_size": 4096, 00:25:08.170 "runtime": 1.049226, 00:25:08.170 "iops": 5189.539717849158, 00:25:08.170 "mibps": 20.27163952284827, 00:25:08.170 "io_failed": 0, 00:25:08.170 "io_timeout": 0, 00:25:08.170 "avg_latency_us": 24137.513608815425, 00:25:08.170 "min_latency_us": 5816.32, 00:25:08.170 "max_latency_us": 54831.78666666667 00:25:08.170 } 00:25:08.170 ], 00:25:08.170 "core_count": 1 00:25:08.170 } 00:25:08.170 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:08.170 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.170 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.170 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.170 14:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:08.170 "subsystems": [ 00:25:08.170 { 00:25:08.170 "subsystem": "keyring", 00:25:08.170 "config": [ 00:25:08.170 { 00:25:08.170 "method": "keyring_file_add_key", 00:25:08.170 "params": { 00:25:08.170 "name": "key0", 00:25:08.170 "path": "/tmp/tmp.y62da5yQbO" 00:25:08.170 } 00:25:08.170 } 00:25:08.170 ] 00:25:08.170 }, 00:25:08.170 { 00:25:08.170 "subsystem": "iobuf", 00:25:08.170 "config": [ 00:25:08.170 { 00:25:08.170 "method": "iobuf_set_options", 00:25:08.170 "params": { 00:25:08.170 "small_pool_count": 8192, 00:25:08.170 "large_pool_count": 1024, 00:25:08.170 "small_bufsize": 8192, 00:25:08.170 "large_bufsize": 135168, 00:25:08.170 "enable_numa": false 00:25:08.170 } 00:25:08.170 } 00:25:08.170 ] 00:25:08.170 }, 00:25:08.170 { 00:25:08.170 "subsystem": "sock", 00:25:08.170 "config": [ 00:25:08.170 { 00:25:08.170 "method": "sock_set_default_impl", 00:25:08.170 "params": { 00:25:08.170 "impl_name": "posix" 00:25:08.170 } 00:25:08.170 }, 00:25:08.170 { 00:25:08.170 "method": "sock_impl_set_options", 00:25:08.170 "params": { 00:25:08.170 "impl_name": "ssl", 00:25:08.170 "recv_buf_size": 4096, 00:25:08.170 "send_buf_size": 4096, 00:25:08.170 "enable_recv_pipe": true, 00:25:08.170 "enable_quickack": false, 00:25:08.171 "enable_placement_id": 0, 00:25:08.171 "enable_zerocopy_send_server": true, 00:25:08.171 "enable_zerocopy_send_client": false, 00:25:08.171 "zerocopy_threshold": 0, 00:25:08.171 "tls_version": 0, 00:25:08.171 "enable_ktls": false 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "sock_impl_set_options", 00:25:08.171 "params": { 00:25:08.171 "impl_name": "posix", 00:25:08.171 "recv_buf_size": 2097152, 00:25:08.171 "send_buf_size": 2097152, 00:25:08.171 "enable_recv_pipe": true, 00:25:08.171 "enable_quickack": false, 00:25:08.171 "enable_placement_id": 0, 00:25:08.171 "enable_zerocopy_send_server": true, 00:25:08.171 "enable_zerocopy_send_client": false, 00:25:08.171 "zerocopy_threshold": 0, 00:25:08.171 "tls_version": 0, 00:25:08.171 "enable_ktls": false 00:25:08.171 } 00:25:08.171 } 00:25:08.171 ] 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "subsystem": "vmd", 00:25:08.171 "config": [] 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "subsystem": "accel", 00:25:08.171 "config": [ 00:25:08.171 { 00:25:08.171 "method": "accel_set_options", 00:25:08.171 "params": { 00:25:08.171 "small_cache_size": 128, 00:25:08.171 "large_cache_size": 16, 00:25:08.171 "task_count": 2048, 00:25:08.171 "sequence_count": 2048, 00:25:08.171 "buf_count": 2048 00:25:08.171 } 00:25:08.171 } 00:25:08.171 ] 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "subsystem": "bdev", 00:25:08.171 "config": [ 00:25:08.171 { 00:25:08.171 "method": "bdev_set_options", 00:25:08.171 "params": { 00:25:08.171 "bdev_io_pool_size": 65535, 00:25:08.171 "bdev_io_cache_size": 256, 00:25:08.171 "bdev_auto_examine": true, 00:25:08.171 "iobuf_small_cache_size": 128, 00:25:08.171 "iobuf_large_cache_size": 16 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "bdev_raid_set_options", 00:25:08.171 "params": { 00:25:08.171 "process_window_size_kb": 1024, 00:25:08.171 "process_max_bandwidth_mb_sec": 0 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "bdev_iscsi_set_options", 00:25:08.171 "params": { 00:25:08.171 "timeout_sec": 30 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "bdev_nvme_set_options", 00:25:08.171 "params": { 00:25:08.171 "action_on_timeout": "none", 00:25:08.171 
"timeout_us": 0, 00:25:08.171 "timeout_admin_us": 0, 00:25:08.171 "keep_alive_timeout_ms": 10000, 00:25:08.171 "arbitration_burst": 0, 00:25:08.171 "low_priority_weight": 0, 00:25:08.171 "medium_priority_weight": 0, 00:25:08.171 "high_priority_weight": 0, 00:25:08.171 "nvme_adminq_poll_period_us": 10000, 00:25:08.171 "nvme_ioq_poll_period_us": 0, 00:25:08.171 "io_queue_requests": 0, 00:25:08.171 "delay_cmd_submit": true, 00:25:08.171 "transport_retry_count": 4, 00:25:08.171 "bdev_retry_count": 3, 00:25:08.171 "transport_ack_timeout": 0, 00:25:08.171 "ctrlr_loss_timeout_sec": 0, 00:25:08.171 "reconnect_delay_sec": 0, 00:25:08.171 "fast_io_fail_timeout_sec": 0, 00:25:08.171 "disable_auto_failback": false, 00:25:08.171 "generate_uuids": false, 00:25:08.171 "transport_tos": 0, 00:25:08.171 "nvme_error_stat": false, 00:25:08.171 "rdma_srq_size": 0, 00:25:08.171 "io_path_stat": false, 00:25:08.171 "allow_accel_sequence": false, 00:25:08.171 "rdma_max_cq_size": 0, 00:25:08.171 "rdma_cm_event_timeout_ms": 0, 00:25:08.171 "dhchap_digests": [ 00:25:08.171 "sha256", 00:25:08.171 "sha384", 00:25:08.171 "sha512" 00:25:08.171 ], 00:25:08.171 "dhchap_dhgroups": [ 00:25:08.171 "null", 00:25:08.171 "ffdhe2048", 00:25:08.171 "ffdhe3072", 00:25:08.171 "ffdhe4096", 00:25:08.171 "ffdhe6144", 00:25:08.171 "ffdhe8192" 00:25:08.171 ] 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "bdev_nvme_set_hotplug", 00:25:08.171 "params": { 00:25:08.171 "period_us": 100000, 00:25:08.171 "enable": false 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "bdev_malloc_create", 00:25:08.171 "params": { 00:25:08.171 "name": "malloc0", 00:25:08.171 "num_blocks": 8192, 00:25:08.171 "block_size": 4096, 00:25:08.171 "physical_block_size": 4096, 00:25:08.171 "uuid": "b303075f-b278-459f-879b-ab4fcf24e914", 00:25:08.171 "optimal_io_boundary": 0, 00:25:08.171 "md_size": 0, 00:25:08.171 "dif_type": 0, 00:25:08.171 "dif_is_head_of_md": false, 00:25:08.171 "dif_pi_format": 0 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "bdev_wait_for_examine" 00:25:08.171 } 00:25:08.171 ] 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "subsystem": "nbd", 00:25:08.171 "config": [] 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "subsystem": "scheduler", 00:25:08.171 "config": [ 00:25:08.171 { 00:25:08.171 "method": "framework_set_scheduler", 00:25:08.171 "params": { 00:25:08.171 "name": "static" 00:25:08.171 } 00:25:08.171 } 00:25:08.171 ] 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "subsystem": "nvmf", 00:25:08.171 "config": [ 00:25:08.171 { 00:25:08.171 "method": "nvmf_set_config", 00:25:08.171 "params": { 00:25:08.171 "discovery_filter": "match_any", 00:25:08.171 "admin_cmd_passthru": { 00:25:08.171 "identify_ctrlr": false 00:25:08.171 }, 00:25:08.171 "dhchap_digests": [ 00:25:08.171 "sha256", 00:25:08.171 "sha384", 00:25:08.171 "sha512" 00:25:08.171 ], 00:25:08.171 "dhchap_dhgroups": [ 00:25:08.171 "null", 00:25:08.171 "ffdhe2048", 00:25:08.171 "ffdhe3072", 00:25:08.171 "ffdhe4096", 00:25:08.171 "ffdhe6144", 00:25:08.171 "ffdhe8192" 00:25:08.171 ] 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "nvmf_set_max_subsystems", 00:25:08.171 "params": { 00:25:08.171 "max_subsystems": 1024 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "nvmf_set_crdt", 00:25:08.171 "params": { 00:25:08.171 "crdt1": 0, 00:25:08.171 "crdt2": 0, 00:25:08.171 "crdt3": 0 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "nvmf_create_transport", 00:25:08.171 "params": 
{ 00:25:08.171 "trtype": "TCP", 00:25:08.171 "max_queue_depth": 128, 00:25:08.171 "max_io_qpairs_per_ctrlr": 127, 00:25:08.171 "in_capsule_data_size": 4096, 00:25:08.171 "max_io_size": 131072, 00:25:08.171 "io_unit_size": 131072, 00:25:08.171 "max_aq_depth": 128, 00:25:08.171 "num_shared_buffers": 511, 00:25:08.171 "buf_cache_size": 4294967295, 00:25:08.171 "dif_insert_or_strip": false, 00:25:08.171 "zcopy": false, 00:25:08.171 "c2h_success": false, 00:25:08.171 "sock_priority": 0, 00:25:08.171 "abort_timeout_sec": 1, 00:25:08.171 "ack_timeout": 0, 00:25:08.171 "data_wr_pool_size": 0 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "nvmf_create_subsystem", 00:25:08.171 "params": { 00:25:08.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.171 "allow_any_host": false, 00:25:08.171 "serial_number": "00000000000000000000", 00:25:08.171 "model_number": "SPDK bdev Controller", 00:25:08.171 "max_namespaces": 32, 00:25:08.171 "min_cntlid": 1, 00:25:08.171 "max_cntlid": 65519, 00:25:08.171 "ana_reporting": false 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "nvmf_subsystem_add_host", 00:25:08.171 "params": { 00:25:08.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.171 "host": "nqn.2016-06.io.spdk:host1", 00:25:08.171 "psk": "key0" 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "nvmf_subsystem_add_ns", 00:25:08.171 "params": { 00:25:08.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.171 "namespace": { 00:25:08.171 "nsid": 1, 00:25:08.171 "bdev_name": "malloc0", 00:25:08.171 "nguid": "B303075FB278459F879BAB4FCF24E914", 00:25:08.171 "uuid": "b303075f-b278-459f-879b-ab4fcf24e914", 00:25:08.171 "no_auto_visible": false 00:25:08.171 } 00:25:08.171 } 00:25:08.171 }, 00:25:08.171 { 00:25:08.171 "method": "nvmf_subsystem_add_listener", 00:25:08.171 "params": { 00:25:08.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.171 "listen_address": { 00:25:08.171 "trtype": "TCP", 00:25:08.171 "adrfam": "IPv4", 00:25:08.171 "traddr": "10.0.0.2", 00:25:08.171 "trsvcid": "4420" 00:25:08.171 }, 00:25:08.171 "secure_channel": false, 00:25:08.171 "sock_impl": "ssl" 00:25:08.171 } 00:25:08.171 } 00:25:08.171 ] 00:25:08.171 } 00:25:08.171 ] 00:25:08.171 }' 00:25:08.171 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:08.432 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:08.432 "subsystems": [ 00:25:08.432 { 00:25:08.432 "subsystem": "keyring", 00:25:08.432 "config": [ 00:25:08.432 { 00:25:08.432 "method": "keyring_file_add_key", 00:25:08.432 "params": { 00:25:08.432 "name": "key0", 00:25:08.432 "path": "/tmp/tmp.y62da5yQbO" 00:25:08.432 } 00:25:08.432 } 00:25:08.432 ] 00:25:08.432 }, 00:25:08.432 { 00:25:08.432 "subsystem": "iobuf", 00:25:08.432 "config": [ 00:25:08.432 { 00:25:08.432 "method": "iobuf_set_options", 00:25:08.432 "params": { 00:25:08.432 "small_pool_count": 8192, 00:25:08.432 "large_pool_count": 1024, 00:25:08.432 "small_bufsize": 8192, 00:25:08.432 "large_bufsize": 135168, 00:25:08.432 "enable_numa": false 00:25:08.432 } 00:25:08.432 } 00:25:08.432 ] 00:25:08.432 }, 00:25:08.432 { 00:25:08.432 "subsystem": "sock", 00:25:08.432 "config": [ 00:25:08.432 { 00:25:08.432 "method": "sock_set_default_impl", 00:25:08.432 "params": { 00:25:08.432 "impl_name": "posix" 00:25:08.432 } 00:25:08.432 }, 00:25:08.432 { 00:25:08.432 "method": "sock_impl_set_options", 00:25:08.432 
"params": { 00:25:08.432 "impl_name": "ssl", 00:25:08.432 "recv_buf_size": 4096, 00:25:08.432 "send_buf_size": 4096, 00:25:08.432 "enable_recv_pipe": true, 00:25:08.432 "enable_quickack": false, 00:25:08.432 "enable_placement_id": 0, 00:25:08.432 "enable_zerocopy_send_server": true, 00:25:08.432 "enable_zerocopy_send_client": false, 00:25:08.432 "zerocopy_threshold": 0, 00:25:08.432 "tls_version": 0, 00:25:08.432 "enable_ktls": false 00:25:08.432 } 00:25:08.432 }, 00:25:08.432 { 00:25:08.432 "method": "sock_impl_set_options", 00:25:08.432 "params": { 00:25:08.432 "impl_name": "posix", 00:25:08.432 "recv_buf_size": 2097152, 00:25:08.432 "send_buf_size": 2097152, 00:25:08.432 "enable_recv_pipe": true, 00:25:08.432 "enable_quickack": false, 00:25:08.432 "enable_placement_id": 0, 00:25:08.432 "enable_zerocopy_send_server": true, 00:25:08.432 "enable_zerocopy_send_client": false, 00:25:08.432 "zerocopy_threshold": 0, 00:25:08.432 "tls_version": 0, 00:25:08.432 "enable_ktls": false 00:25:08.432 } 00:25:08.432 } 00:25:08.432 ] 00:25:08.432 }, 00:25:08.432 { 00:25:08.432 "subsystem": "vmd", 00:25:08.432 "config": [] 00:25:08.432 }, 00:25:08.432 { 00:25:08.432 "subsystem": "accel", 00:25:08.432 "config": [ 00:25:08.432 { 00:25:08.432 "method": "accel_set_options", 00:25:08.432 "params": { 00:25:08.432 "small_cache_size": 128, 00:25:08.432 "large_cache_size": 16, 00:25:08.432 "task_count": 2048, 00:25:08.432 "sequence_count": 2048, 00:25:08.432 "buf_count": 2048 00:25:08.432 } 00:25:08.432 } 00:25:08.432 ] 00:25:08.432 }, 00:25:08.432 { 00:25:08.432 "subsystem": "bdev", 00:25:08.432 "config": [ 00:25:08.432 { 00:25:08.432 "method": "bdev_set_options", 00:25:08.432 "params": { 00:25:08.432 "bdev_io_pool_size": 65535, 00:25:08.432 "bdev_io_cache_size": 256, 00:25:08.432 "bdev_auto_examine": true, 00:25:08.432 "iobuf_small_cache_size": 128, 00:25:08.432 "iobuf_large_cache_size": 16 00:25:08.432 } 00:25:08.432 }, 00:25:08.432 { 00:25:08.432 "method": "bdev_raid_set_options", 00:25:08.432 "params": { 00:25:08.432 "process_window_size_kb": 1024, 00:25:08.432 "process_max_bandwidth_mb_sec": 0 00:25:08.432 } 00:25:08.432 }, 00:25:08.432 { 00:25:08.432 "method": "bdev_iscsi_set_options", 00:25:08.432 "params": { 00:25:08.432 "timeout_sec": 30 00:25:08.432 } 00:25:08.432 }, 00:25:08.432 { 00:25:08.432 "method": "bdev_nvme_set_options", 00:25:08.432 "params": { 00:25:08.432 "action_on_timeout": "none", 00:25:08.433 "timeout_us": 0, 00:25:08.433 "timeout_admin_us": 0, 00:25:08.433 "keep_alive_timeout_ms": 10000, 00:25:08.433 "arbitration_burst": 0, 00:25:08.433 "low_priority_weight": 0, 00:25:08.433 "medium_priority_weight": 0, 00:25:08.433 "high_priority_weight": 0, 00:25:08.433 "nvme_adminq_poll_period_us": 10000, 00:25:08.433 "nvme_ioq_poll_period_us": 0, 00:25:08.433 "io_queue_requests": 512, 00:25:08.433 "delay_cmd_submit": true, 00:25:08.433 "transport_retry_count": 4, 00:25:08.433 "bdev_retry_count": 3, 00:25:08.433 "transport_ack_timeout": 0, 00:25:08.433 "ctrlr_loss_timeout_sec": 0, 00:25:08.433 "reconnect_delay_sec": 0, 00:25:08.433 "fast_io_fail_timeout_sec": 0, 00:25:08.433 "disable_auto_failback": false, 00:25:08.433 "generate_uuids": false, 00:25:08.433 "transport_tos": 0, 00:25:08.433 "nvme_error_stat": false, 00:25:08.433 "rdma_srq_size": 0, 00:25:08.433 "io_path_stat": false, 00:25:08.433 "allow_accel_sequence": false, 00:25:08.433 "rdma_max_cq_size": 0, 00:25:08.433 "rdma_cm_event_timeout_ms": 0, 00:25:08.433 "dhchap_digests": [ 00:25:08.433 "sha256", 00:25:08.433 "sha384", 00:25:08.433 
"sha512" 00:25:08.433 ], 00:25:08.433 "dhchap_dhgroups": [ 00:25:08.433 "null", 00:25:08.433 "ffdhe2048", 00:25:08.433 "ffdhe3072", 00:25:08.433 "ffdhe4096", 00:25:08.433 "ffdhe6144", 00:25:08.433 "ffdhe8192" 00:25:08.433 ] 00:25:08.433 } 00:25:08.433 }, 00:25:08.433 { 00:25:08.433 "method": "bdev_nvme_attach_controller", 00:25:08.433 "params": { 00:25:08.433 "name": "nvme0", 00:25:08.433 "trtype": "TCP", 00:25:08.433 "adrfam": "IPv4", 00:25:08.433 "traddr": "10.0.0.2", 00:25:08.433 "trsvcid": "4420", 00:25:08.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.433 "prchk_reftag": false, 00:25:08.433 "prchk_guard": false, 00:25:08.433 "ctrlr_loss_timeout_sec": 0, 00:25:08.433 "reconnect_delay_sec": 0, 00:25:08.433 "fast_io_fail_timeout_sec": 0, 00:25:08.433 "psk": "key0", 00:25:08.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.433 "hdgst": false, 00:25:08.433 "ddgst": false, 00:25:08.433 "multipath": "multipath" 00:25:08.433 } 00:25:08.433 }, 00:25:08.433 { 00:25:08.433 "method": "bdev_nvme_set_hotplug", 00:25:08.433 "params": { 00:25:08.433 "period_us": 100000, 00:25:08.433 "enable": false 00:25:08.433 } 00:25:08.433 }, 00:25:08.433 { 00:25:08.433 "method": "bdev_enable_histogram", 00:25:08.433 "params": { 00:25:08.433 "name": "nvme0n1", 00:25:08.433 "enable": true 00:25:08.433 } 00:25:08.433 }, 00:25:08.433 { 00:25:08.433 "method": "bdev_wait_for_examine" 00:25:08.433 } 00:25:08.433 ] 00:25:08.433 }, 00:25:08.433 { 00:25:08.433 "subsystem": "nbd", 00:25:08.433 "config": [] 00:25:08.433 } 00:25:08.433 ] 00:25:08.433 }' 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2853546 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2853546 ']' 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2853546 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2853546 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2853546' 00:25:08.433 killing process with pid 2853546 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2853546 00:25:08.433 Received shutdown signal, test time was about 1.000000 seconds 00:25:08.433 00:25:08.433 Latency(us) 00:25:08.433 [2024-12-06T13:18:57.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.433 [2024-12-06T13:18:57.073Z] =================================================================================================================== 00:25:08.433 [2024-12-06T13:18:57.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2853546 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2853403 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2853403 
']' 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2853403 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.433 14:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2853403 00:25:08.433 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:08.433 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:08.433 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2853403' 00:25:08.433 killing process with pid 2853403 00:25:08.433 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2853403 00:25:08.433 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2853403 00:25:08.693 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:08.693 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.693 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.693 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:08.693 "subsystems": [ 00:25:08.693 { 00:25:08.693 "subsystem": "keyring", 00:25:08.693 "config": [ 00:25:08.693 { 00:25:08.693 "method": "keyring_file_add_key", 00:25:08.693 "params": { 00:25:08.693 "name": "key0", 00:25:08.693 "path": "/tmp/tmp.y62da5yQbO" 00:25:08.693 } 00:25:08.693 } 00:25:08.693 ] 00:25:08.693 }, 00:25:08.693 { 00:25:08.693 "subsystem": "iobuf", 00:25:08.694 "config": [ 00:25:08.694 { 00:25:08.694 "method": "iobuf_set_options", 00:25:08.694 "params": { 00:25:08.694 "small_pool_count": 8192, 00:25:08.694 "large_pool_count": 1024, 00:25:08.694 "small_bufsize": 8192, 00:25:08.694 "large_bufsize": 135168, 00:25:08.694 "enable_numa": false 00:25:08.694 } 00:25:08.694 } 00:25:08.694 ] 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "subsystem": "sock", 00:25:08.694 "config": [ 00:25:08.694 { 00:25:08.694 "method": "sock_set_default_impl", 00:25:08.694 "params": { 00:25:08.694 "impl_name": "posix" 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "sock_impl_set_options", 00:25:08.694 "params": { 00:25:08.694 "impl_name": "ssl", 00:25:08.694 "recv_buf_size": 4096, 00:25:08.694 "send_buf_size": 4096, 00:25:08.694 "enable_recv_pipe": true, 00:25:08.694 "enable_quickack": false, 00:25:08.694 "enable_placement_id": 0, 00:25:08.694 "enable_zerocopy_send_server": true, 00:25:08.694 "enable_zerocopy_send_client": false, 00:25:08.694 "zerocopy_threshold": 0, 00:25:08.694 "tls_version": 0, 00:25:08.694 "enable_ktls": false 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "sock_impl_set_options", 00:25:08.694 "params": { 00:25:08.694 "impl_name": "posix", 00:25:08.694 "recv_buf_size": 2097152, 00:25:08.694 "send_buf_size": 2097152, 00:25:08.694 "enable_recv_pipe": true, 00:25:08.694 "enable_quickack": false, 00:25:08.694 "enable_placement_id": 0, 00:25:08.694 "enable_zerocopy_send_server": true, 00:25:08.694 "enable_zerocopy_send_client": false, 00:25:08.694 "zerocopy_threshold": 0, 00:25:08.694 "tls_version": 0, 00:25:08.694 "enable_ktls": 
false 00:25:08.694 } 00:25:08.694 } 00:25:08.694 ] 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "subsystem": "vmd", 00:25:08.694 "config": [] 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "subsystem": "accel", 00:25:08.694 "config": [ 00:25:08.694 { 00:25:08.694 "method": "accel_set_options", 00:25:08.694 "params": { 00:25:08.694 "small_cache_size": 128, 00:25:08.694 "large_cache_size": 16, 00:25:08.694 "task_count": 2048, 00:25:08.694 "sequence_count": 2048, 00:25:08.694 "buf_count": 2048 00:25:08.694 } 00:25:08.694 } 00:25:08.694 ] 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "subsystem": "bdev", 00:25:08.694 "config": [ 00:25:08.694 { 00:25:08.694 "method": "bdev_set_options", 00:25:08.694 "params": { 00:25:08.694 "bdev_io_pool_size": 65535, 00:25:08.694 "bdev_io_cache_size": 256, 00:25:08.694 "bdev_auto_examine": true, 00:25:08.694 "iobuf_small_cache_size": 128, 00:25:08.694 "iobuf_large_cache_size": 16 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "bdev_raid_set_options", 00:25:08.694 "params": { 00:25:08.694 "process_window_size_kb": 1024, 00:25:08.694 "process_max_bandwidth_mb_sec": 0 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "bdev_iscsi_set_options", 00:25:08.694 "params": { 00:25:08.694 "timeout_sec": 30 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "bdev_nvme_set_options", 00:25:08.694 "params": { 00:25:08.694 "action_on_timeout": "none", 00:25:08.694 "timeout_us": 0, 00:25:08.694 "timeout_admin_us": 0, 00:25:08.694 "keep_alive_timeout_ms": 10000, 00:25:08.694 "arbitration_burst": 0, 00:25:08.694 "low_priority_weight": 0, 00:25:08.694 "medium_priority_weight": 0, 00:25:08.694 "high_priority_weight": 0, 00:25:08.694 "nvme_adminq_poll_period_us": 10000, 00:25:08.694 "nvme_ioq_poll_period_us": 0, 00:25:08.694 "io_queue_requests": 0, 00:25:08.694 "delay_cmd_submit": true, 00:25:08.694 "transport_retry_count": 4, 00:25:08.694 "bdev_retry_count": 3, 00:25:08.694 "transport_ack_timeout": 0, 00:25:08.694 "ctrlr_loss_timeout_sec": 0, 00:25:08.694 "reconnect_delay_sec": 0, 00:25:08.694 "fast_io_fail_timeout_sec": 0, 00:25:08.694 "disable_auto_failback": false, 00:25:08.694 "generate_uuids": false, 00:25:08.694 "transport_tos": 0, 00:25:08.694 "nvme_error_stat": false, 00:25:08.694 "rdma_srq_size": 0, 00:25:08.694 "io_path_stat": false, 00:25:08.694 "allow_accel_sequence": false, 00:25:08.694 "rdma_max_cq_size": 0, 00:25:08.694 "rdma_cm_event_timeout_ms": 0, 00:25:08.694 "dhchap_digests": [ 00:25:08.694 "sha256", 00:25:08.694 "sha384", 00:25:08.694 "sha512" 00:25:08.694 ], 00:25:08.694 "dhchap_dhgroups": [ 00:25:08.694 "null", 00:25:08.694 "ffdhe2048", 00:25:08.694 "ffdhe3072", 00:25:08.694 "ffdhe4096", 00:25:08.694 "ffdhe6144", 00:25:08.694 "ffdhe8192" 00:25:08.694 ] 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "bdev_nvme_set_hotplug", 00:25:08.694 "params": { 00:25:08.694 "period_us": 100000, 00:25:08.694 "enable": false 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "bdev_malloc_create", 00:25:08.694 "params": { 00:25:08.694 "name": "malloc0", 00:25:08.694 "num_blocks": 8192, 00:25:08.694 "block_size": 4096, 00:25:08.694 "physical_block_size": 4096, 00:25:08.694 "uuid": "b303075f-b278-459f-879b-ab4fcf24e914", 00:25:08.694 "optimal_io_boundary": 0, 00:25:08.694 "md_size": 0, 00:25:08.694 "dif_type": 0, 00:25:08.694 "dif_is_head_of_md": false, 00:25:08.694 "dif_pi_format": 0 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "bdev_wait_for_examine" 
00:25:08.694 } 00:25:08.694 ] 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "subsystem": "nbd", 00:25:08.694 "config": [] 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "subsystem": "scheduler", 00:25:08.694 "config": [ 00:25:08.694 { 00:25:08.694 "method": "framework_set_scheduler", 00:25:08.694 "params": { 00:25:08.694 "name": "static" 00:25:08.694 } 00:25:08.694 } 00:25:08.694 ] 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "subsystem": "nvmf", 00:25:08.694 "config": [ 00:25:08.694 { 00:25:08.694 "method": "nvmf_set_config", 00:25:08.694 "params": { 00:25:08.694 "discovery_filter": "match_any", 00:25:08.694 "admin_cmd_passthru": { 00:25:08.694 "identify_ctrlr": false 00:25:08.694 }, 00:25:08.694 "dhchap_digests": [ 00:25:08.694 "sha256", 00:25:08.694 "sha384", 00:25:08.694 "sha512" 00:25:08.694 ], 00:25:08.694 "dhchap_dhgroups": [ 00:25:08.694 "null", 00:25:08.694 "ffdhe2048", 00:25:08.694 "ffdhe3072", 00:25:08.694 "ffdhe4096", 00:25:08.694 "ffdhe6144", 00:25:08.694 "ffdhe8192" 00:25:08.694 ] 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "nvmf_set_max_subsystems", 00:25:08.694 "params": { 00:25:08.694 "max_subsystems": 1024 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "nvmf_set_crdt", 00:25:08.694 "params": { 00:25:08.694 "crdt1": 0, 00:25:08.694 "crdt2": 0, 00:25:08.694 "crdt3": 0 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "nvmf_create_transport", 00:25:08.694 "params": { 00:25:08.694 "trtype": "TCP", 00:25:08.694 "max_queue_depth": 128, 00:25:08.694 "max_io_qpairs_per_ctrlr": 127, 00:25:08.694 "in_capsule_data_size": 4096, 00:25:08.694 "max_io_size": 131072, 00:25:08.694 "io_unit_size": 131072, 00:25:08.694 "max_aq_depth": 128, 00:25:08.694 "num_shared_buffers": 511, 00:25:08.694 "buf_cache_size": 4294967295, 00:25:08.694 "dif_insert_or_strip": false, 00:25:08.694 "zcopy": false, 00:25:08.694 "c2h_success": false, 00:25:08.694 "sock_priority": 0, 00:25:08.694 "abort_timeout_sec": 1, 00:25:08.694 "ack_timeout": 0, 00:25:08.694 "data_wr_pool_size": 0 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "nvmf_create_subsystem", 00:25:08.694 "params": { 00:25:08.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.694 "allow_any_host": false, 00:25:08.694 "serial_number": "00000000000000000000", 00:25:08.694 "model_number": "SPDK bdev Controller", 00:25:08.694 "max_namespaces": 32, 00:25:08.694 "min_cntlid": 1, 00:25:08.694 "max_cntlid": 65519, 00:25:08.694 "ana_reporting": false 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "nvmf_subsystem_add_host", 00:25:08.694 "params": { 00:25:08.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.694 "host": "nqn.2016-06.io.spdk:host1", 00:25:08.694 "psk": "key0" 00:25:08.694 } 00:25:08.694 }, 00:25:08.694 { 00:25:08.694 "method": "nvmf_subsystem_add_ns", 00:25:08.694 "params": { 00:25:08.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.694 "namespace": { 00:25:08.694 "nsid": 1, 00:25:08.694 "bdev_name": "malloc0", 00:25:08.694 "nguid": "B303075FB278459F879BAB4FCF24E914", 00:25:08.695 "uuid": "b303075f-b278-459f-879b-ab4fcf24e914", 00:25:08.695 "no_auto_visible": false 00:25:08.695 } 00:25:08.695 } 00:25:08.695 }, 00:25:08.695 { 00:25:08.695 "method": "nvmf_subsystem_add_listener", 00:25:08.695 "params": { 00:25:08.695 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.695 "listen_address": { 00:25:08.695 "trtype": "TCP", 00:25:08.695 "adrfam": "IPv4", 00:25:08.695 "traddr": "10.0.0.2", 00:25:08.695 "trsvcid": "4420" 00:25:08.695 }, 00:25:08.695 
"secure_channel": false, 00:25:08.695 "sock_impl": "ssl" 00:25:08.695 } 00:25:08.695 } 00:25:08.695 ] 00:25:08.695 } 00:25:08.695 ] 00:25:08.695 }' 00:25:08.695 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.695 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2854229 00:25:08.695 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2854229 00:25:08.695 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:08.695 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2854229 ']' 00:25:08.695 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.695 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.695 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.695 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.695 14:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.695 [2024-12-06 14:18:57.222312] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:25:08.695 [2024-12-06 14:18:57.222366] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.695 [2024-12-06 14:18:57.311103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.954 [2024-12-06 14:18:57.341845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.954 [2024-12-06 14:18:57.341878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.954 [2024-12-06 14:18:57.341884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.954 [2024-12-06 14:18:57.341889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.954 [2024-12-06 14:18:57.341893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:08.954 [2024-12-06 14:18:57.342379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.954 [2024-12-06 14:18:57.536695] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.954 [2024-12-06 14:18:57.568725] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:08.954 [2024-12-06 14:18:57.568920] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2854368 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2854368 /var/tmp/bdevperf.sock 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2854368 ']' 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
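The bdevperf side is relaunched the same way from its own saved configuration (the -c /dev/fd/63 launch and the JSON it is fed follow below): because the saved config already contains keyring_file_add_key and the TLS bdev_nvme_attach_controller call, no further RPCs are needed before checking that the controller came back and rerunning I/O. A sketch under the same temp-file assumption as above:

    # save the bdevperf-side config (key + TLS-attached controller) and restart from it
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > /tmp/bperf_config.json
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c /tmp/bperf_config.json &

    # confirm the controller restored from the saved config is present, then rerun the workload
    name=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [ "$name" = "nvme0" ]
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests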
00:25:09.524 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:09.525 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.525 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.525 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:09.525 "subsystems": [ 00:25:09.525 { 00:25:09.525 "subsystem": "keyring", 00:25:09.525 "config": [ 00:25:09.525 { 00:25:09.525 "method": "keyring_file_add_key", 00:25:09.525 "params": { 00:25:09.525 "name": "key0", 00:25:09.525 "path": "/tmp/tmp.y62da5yQbO" 00:25:09.525 } 00:25:09.525 } 00:25:09.525 ] 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "subsystem": "iobuf", 00:25:09.525 "config": [ 00:25:09.525 { 00:25:09.525 "method": "iobuf_set_options", 00:25:09.525 "params": { 00:25:09.525 "small_pool_count": 8192, 00:25:09.525 "large_pool_count": 1024, 00:25:09.525 "small_bufsize": 8192, 00:25:09.525 "large_bufsize": 135168, 00:25:09.525 "enable_numa": false 00:25:09.525 } 00:25:09.525 } 00:25:09.525 ] 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "subsystem": "sock", 00:25:09.525 "config": [ 00:25:09.525 { 00:25:09.525 "method": "sock_set_default_impl", 00:25:09.525 "params": { 00:25:09.525 "impl_name": "posix" 00:25:09.525 } 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "method": "sock_impl_set_options", 00:25:09.525 "params": { 00:25:09.525 "impl_name": "ssl", 00:25:09.525 "recv_buf_size": 4096, 00:25:09.525 "send_buf_size": 4096, 00:25:09.525 "enable_recv_pipe": true, 00:25:09.525 "enable_quickack": false, 00:25:09.525 "enable_placement_id": 0, 00:25:09.525 "enable_zerocopy_send_server": true, 00:25:09.525 "enable_zerocopy_send_client": false, 00:25:09.525 "zerocopy_threshold": 0, 00:25:09.525 "tls_version": 0, 00:25:09.525 "enable_ktls": false 00:25:09.525 } 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "method": "sock_impl_set_options", 00:25:09.525 "params": { 00:25:09.525 "impl_name": "posix", 00:25:09.525 "recv_buf_size": 2097152, 00:25:09.525 "send_buf_size": 2097152, 00:25:09.525 "enable_recv_pipe": true, 00:25:09.525 "enable_quickack": false, 00:25:09.525 "enable_placement_id": 0, 00:25:09.525 "enable_zerocopy_send_server": true, 00:25:09.525 "enable_zerocopy_send_client": false, 00:25:09.525 "zerocopy_threshold": 0, 00:25:09.525 "tls_version": 0, 00:25:09.525 "enable_ktls": false 00:25:09.525 } 00:25:09.525 } 00:25:09.525 ] 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "subsystem": "vmd", 00:25:09.525 "config": [] 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "subsystem": "accel", 00:25:09.525 "config": [ 00:25:09.525 { 00:25:09.525 "method": "accel_set_options", 00:25:09.525 "params": { 00:25:09.525 "small_cache_size": 128, 00:25:09.525 "large_cache_size": 16, 00:25:09.525 "task_count": 2048, 00:25:09.525 "sequence_count": 2048, 00:25:09.525 "buf_count": 2048 00:25:09.525 } 00:25:09.525 } 00:25:09.525 ] 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "subsystem": "bdev", 00:25:09.525 "config": [ 00:25:09.525 { 00:25:09.525 "method": "bdev_set_options", 00:25:09.525 "params": { 00:25:09.525 "bdev_io_pool_size": 65535, 00:25:09.525 "bdev_io_cache_size": 256, 00:25:09.525 "bdev_auto_examine": true, 00:25:09.525 "iobuf_small_cache_size": 128, 00:25:09.525 "iobuf_large_cache_size": 16 00:25:09.525 } 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "method": 
"bdev_raid_set_options", 00:25:09.525 "params": { 00:25:09.525 "process_window_size_kb": 1024, 00:25:09.525 "process_max_bandwidth_mb_sec": 0 00:25:09.525 } 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "method": "bdev_iscsi_set_options", 00:25:09.525 "params": { 00:25:09.525 "timeout_sec": 30 00:25:09.525 } 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "method": "bdev_nvme_set_options", 00:25:09.525 "params": { 00:25:09.525 "action_on_timeout": "none", 00:25:09.525 "timeout_us": 0, 00:25:09.525 "timeout_admin_us": 0, 00:25:09.525 "keep_alive_timeout_ms": 10000, 00:25:09.525 "arbitration_burst": 0, 00:25:09.525 "low_priority_weight": 0, 00:25:09.525 "medium_priority_weight": 0, 00:25:09.525 "high_priority_weight": 0, 00:25:09.525 "nvme_adminq_poll_period_us": 10000, 00:25:09.525 "nvme_ioq_poll_period_us": 0, 00:25:09.525 "io_queue_requests": 512, 00:25:09.525 "delay_cmd_submit": true, 00:25:09.525 "transport_retry_count": 4, 00:25:09.525 "bdev_retry_count": 3, 00:25:09.525 "transport_ack_timeout": 0, 00:25:09.525 "ctrlr_loss_timeout_sec": 0, 00:25:09.525 "reconnect_delay_sec": 0, 00:25:09.525 "fast_io_fail_timeout_sec": 0, 00:25:09.525 "disable_auto_failback": false, 00:25:09.525 "generate_uuids": false, 00:25:09.525 "transport_tos": 0, 00:25:09.525 "nvme_error_stat": false, 00:25:09.525 "rdma_srq_size": 0, 00:25:09.525 "io_path_stat": false, 00:25:09.525 "allow_accel_sequence": false, 00:25:09.525 "rdma_max_cq_size": 0, 00:25:09.525 "rdma_cm_event_timeout_ms": 0, 00:25:09.525 "dhchap_digests": [ 00:25:09.525 "sha256", 00:25:09.525 "sha384", 00:25:09.525 "sha512" 00:25:09.525 ], 00:25:09.525 "dhchap_dhgroups": [ 00:25:09.525 "null", 00:25:09.525 "ffdhe2048", 00:25:09.525 "ffdhe3072", 00:25:09.525 "ffdhe4096", 00:25:09.525 "ffdhe6144", 00:25:09.525 "ffdhe8192" 00:25:09.525 ] 00:25:09.525 } 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "method": "bdev_nvme_attach_controller", 00:25:09.525 "params": { 00:25:09.525 "name": "nvme0", 00:25:09.525 "trtype": "TCP", 00:25:09.525 "adrfam": "IPv4", 00:25:09.525 "traddr": "10.0.0.2", 00:25:09.525 "trsvcid": "4420", 00:25:09.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:09.525 "prchk_reftag": false, 00:25:09.525 "prchk_guard": false, 00:25:09.525 "ctrlr_loss_timeout_sec": 0, 00:25:09.525 "reconnect_delay_sec": 0, 00:25:09.525 "fast_io_fail_timeout_sec": 0, 00:25:09.525 "psk": "key0", 00:25:09.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:09.525 "hdgst": false, 00:25:09.525 "ddgst": false, 00:25:09.525 "multipath": "multipath" 00:25:09.525 } 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "method": "bdev_nvme_set_hotplug", 00:25:09.525 "params": { 00:25:09.525 "period_us": 100000, 00:25:09.525 "enable": false 00:25:09.525 } 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "method": "bdev_enable_histogram", 00:25:09.525 "params": { 00:25:09.525 "name": "nvme0n1", 00:25:09.525 "enable": true 00:25:09.525 } 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "method": "bdev_wait_for_examine" 00:25:09.525 } 00:25:09.525 ] 00:25:09.525 }, 00:25:09.525 { 00:25:09.525 "subsystem": "nbd", 00:25:09.525 "config": [] 00:25:09.525 } 00:25:09.525 ] 00:25:09.525 }' 00:25:09.525 [2024-12-06 14:18:58.096824] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:25:09.525 [2024-12-06 14:18:58.096879] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854368 ] 00:25:09.786 [2024-12-06 14:18:58.179679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.786 [2024-12-06 14:18:58.209476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.786 [2024-12-06 14:18:58.345545] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.357 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.357 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:10.357 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:10.357 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:10.617 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.617 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:10.617 Running I/O for 1 seconds... 00:25:11.557 5012.00 IOPS, 19.58 MiB/s 00:25:11.557 Latency(us) 00:25:11.557 [2024-12-06T13:19:00.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.557 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:11.557 Verification LBA range: start 0x0 length 0x2000 00:25:11.557 nvme0n1 : 1.01 5072.12 19.81 0.00 0.00 25073.59 4505.60 109663.57 00:25:11.557 [2024-12-06T13:19:00.197Z] =================================================================================================================== 00:25:11.557 [2024-12-06T13:19:00.197Z] Total : 5072.12 19.81 0.00 0.00 25073.59 4505.60 109663.57 00:25:11.557 { 00:25:11.557 "results": [ 00:25:11.557 { 00:25:11.557 "job": "nvme0n1", 00:25:11.557 "core_mask": "0x2", 00:25:11.557 "workload": "verify", 00:25:11.557 "status": "finished", 00:25:11.557 "verify_range": { 00:25:11.557 "start": 0, 00:25:11.557 "length": 8192 00:25:11.557 }, 00:25:11.557 "queue_depth": 128, 00:25:11.558 "io_size": 4096, 00:25:11.558 "runtime": 1.01358, 00:25:11.558 "iops": 5072.1206022218275, 00:25:11.558 "mibps": 19.812971102429014, 00:25:11.558 "io_failed": 0, 00:25:11.558 "io_timeout": 0, 00:25:11.558 "avg_latency_us": 25073.591099007976, 00:25:11.558 "min_latency_us": 4505.6, 00:25:11.558 "max_latency_us": 109663.57333333333 00:25:11.558 } 00:25:11.558 ], 00:25:11.558 "core_count": 1 00:25:11.558 } 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid 
']' 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:11.818 nvmf_trace.0 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2854368 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2854368 ']' 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2854368 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854368 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854368' 00:25:11.818 killing process with pid 2854368 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2854368 00:25:11.818 Received shutdown signal, test time was about 1.000000 seconds 00:25:11.818 00:25:11.818 Latency(us) 00:25:11.818 [2024-12-06T13:19:00.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.818 [2024-12-06T13:19:00.458Z] =================================================================================================================== 00:25:11.818 [2024-12-06T13:19:00.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.818 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2854368 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:12.079 rmmod nvme_tcp 00:25:12.079 rmmod nvme_fabrics 00:25:12.079 rmmod nvme_keyring 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:12.079 14:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2854229 ']' 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2854229 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2854229 ']' 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2854229 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854229 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854229' 00:25:12.079 killing process with pid 2854229 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2854229 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2854229 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:12.079 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:25:12.340 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:12.340 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:12.340 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.340 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.340 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.247 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:14.247 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1oC1httRft /tmp/tmp.bImA7pkuRb /tmp/tmp.y62da5yQbO 00:25:14.247 00:25:14.247 real 1m28.531s 00:25:14.247 user 2m20.481s 00:25:14.247 sys 0m26.975s 00:25:14.247 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.248 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.248 ************************************ 00:25:14.248 END TEST nvmf_tls 
00:25:14.248 ************************************ 00:25:14.248 14:19:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:14.248 14:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:14.248 14:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.248 14:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:14.248 ************************************ 00:25:14.248 START TEST nvmf_fips 00:25:14.248 ************************************ 00:25:14.248 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:14.507 * Looking for test storage... 00:25:14.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:14.508 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:14.508 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:25:14.508 14:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:14.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.508 --rc genhtml_branch_coverage=1 00:25:14.508 --rc genhtml_function_coverage=1 00:25:14.508 --rc genhtml_legend=1 00:25:14.508 --rc geninfo_all_blocks=1 00:25:14.508 --rc geninfo_unexecuted_blocks=1 00:25:14.508 00:25:14.508 ' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:14.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.508 --rc genhtml_branch_coverage=1 00:25:14.508 --rc genhtml_function_coverage=1 00:25:14.508 --rc genhtml_legend=1 00:25:14.508 --rc geninfo_all_blocks=1 00:25:14.508 --rc geninfo_unexecuted_blocks=1 00:25:14.508 00:25:14.508 ' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:14.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.508 --rc genhtml_branch_coverage=1 00:25:14.508 --rc genhtml_function_coverage=1 00:25:14.508 --rc genhtml_legend=1 00:25:14.508 --rc geninfo_all_blocks=1 00:25:14.508 --rc geninfo_unexecuted_blocks=1 00:25:14.508 00:25:14.508 ' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:14.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.508 --rc genhtml_branch_coverage=1 00:25:14.508 --rc genhtml_function_coverage=1 00:25:14.508 --rc genhtml_legend=1 00:25:14.508 --rc geninfo_all_blocks=1 00:25:14.508 --rc geninfo_unexecuted_blocks=1 00:25:14.508 00:25:14.508 ' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:14.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:14.508 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:14.509 14:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:14.509 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:25:14.769 Error setting digest 00:25:14.769 40D264FA9F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:14.769 40D264FA9F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:14.769 
14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.769 14:19:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.905 14:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:22.905 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.905 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:22.906 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.906 14:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:22.906 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:22.906 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:22.906 14:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:22.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:25:22.906 00:25:22.906 --- 10.0.0.2 ping statistics --- 00:25:22.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.906 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:25:22.906 00:25:22.906 --- 10.0.0.1 ping statistics --- 00:25:22.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.906 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2859144 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2859144 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2859144 ']' 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:22.906 14:19:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:22.906 [2024-12-06 14:19:10.852088] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
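The fips target starting above runs inside the network namespace that nvmftestinit rebuilt just before it: the first E810 port (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk and acts as the target, while the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, with an iptables rule opening TCP 4420. A condensed sketch of that topology, limited to the commands visible in this log:

    # Target port in its own netns, initiator port in the root namespace (sketch)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic arriving on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Reachability checks in both directions, as in the ping output above
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
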
00:25:22.906 [2024-12-06 14:19:10.852156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.906 [2024-12-06 14:19:10.953177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.906 [2024-12-06 14:19:11.003848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.906 [2024-12-06 14:19:11.003903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.906 [2024-12-06 14:19:11.003912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.906 [2024-12-06 14:19:11.003919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.906 [2024-12-06 14:19:11.003925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.906 [2024-12-06 14:19:11.004680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.167 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:23.167 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:23.167 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:23.167 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:23.167 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:23.167 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.167 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:23.167 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:23.167 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:23.167 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.hsp 00:25:23.168 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:23.168 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.hsp 00:25:23.168 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.hsp 00:25:23.168 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.hsp 00:25:23.168 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:23.430 [2024-12-06 14:19:11.870120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.430 [2024-12-06 14:19:11.886112] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:23.430 [2024-12-06 14:19:11.886420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.430 malloc0 00:25:23.430 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:23.430 14:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2859323 00:25:23.430 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2859323 /var/tmp/bdevperf.sock 00:25:23.430 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:23.430 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2859323 ']' 00:25:23.430 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:23.430 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:23.430 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:23.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:23.430 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:23.430 14:19:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:23.430 [2024-12-06 14:19:12.029237] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:25:23.430 [2024-12-06 14:19:12.029310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859323 ] 00:25:23.691 [2024-12-06 14:19:12.125885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.691 [2024-12-06 14:19:12.177369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.263 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.263 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:24.263 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.hsp 00:25:24.536 14:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:24.536 [2024-12-06 14:19:13.161123] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:24.797 TLSTESTn1 00:25:24.797 14:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:24.797 Running I/O for 10 seconds... 
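With the target listening, the initiator-side sequence traced above boils down to: write the TLS PSK to a 0600 file, register it with the bdevperf app as key0, attach the TLSTEST controller with --psk, and drive the 10-second verify workload through bdevperf.py. A compact sketch assembled only from commands shown in this log (SPDK_DIR is shorthand for the checkout path used here):

    # fips TLS initiator sequence, condensed from the trace above (sketch)
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"
    # Hand the PSK to bdevperf and attach the TLS-protected NVMe/TCP controller
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # Kick off the verify workload configured on the bdevperf command line (-w verify -t 10)
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
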
00:25:27.120 3418.00 IOPS, 13.35 MiB/s [2024-12-06T13:19:16.697Z] 4263.00 IOPS, 16.65 MiB/s [2024-12-06T13:19:17.640Z] 4841.33 IOPS, 18.91 MiB/s [2024-12-06T13:19:18.583Z] 4709.75 IOPS, 18.40 MiB/s [2024-12-06T13:19:19.525Z] 4971.20 IOPS, 19.42 MiB/s [2024-12-06T13:19:20.470Z] 5075.17 IOPS, 19.82 MiB/s [2024-12-06T13:19:21.411Z] 5255.86 IOPS, 20.53 MiB/s [2024-12-06T13:19:22.798Z] 5307.75 IOPS, 20.73 MiB/s [2024-12-06T13:19:23.739Z] 5241.67 IOPS, 20.48 MiB/s [2024-12-06T13:19:23.739Z] 5221.20 IOPS, 20.40 MiB/s 00:25:35.099 Latency(us) 00:25:35.099 [2024-12-06T13:19:23.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.099 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:35.099 Verification LBA range: start 0x0 length 0x2000 00:25:35.099 TLSTESTn1 : 10.01 5226.88 20.42 0.00 0.00 24453.95 5679.79 37573.97 00:25:35.099 [2024-12-06T13:19:23.739Z] =================================================================================================================== 00:25:35.099 [2024-12-06T13:19:23.739Z] Total : 5226.88 20.42 0.00 0.00 24453.95 5679.79 37573.97 00:25:35.099 { 00:25:35.099 "results": [ 00:25:35.099 { 00:25:35.099 "job": "TLSTESTn1", 00:25:35.099 "core_mask": "0x4", 00:25:35.099 "workload": "verify", 00:25:35.099 "status": "finished", 00:25:35.099 "verify_range": { 00:25:35.099 "start": 0, 00:25:35.099 "length": 8192 00:25:35.099 }, 00:25:35.099 "queue_depth": 128, 00:25:35.099 "io_size": 4096, 00:25:35.099 "runtime": 10.013421, 00:25:35.099 "iops": 5226.884997644661, 00:25:35.099 "mibps": 20.417519522049457, 00:25:35.099 "io_failed": 0, 00:25:35.099 "io_timeout": 0, 00:25:35.099 "avg_latency_us": 24453.94850239146, 00:25:35.099 "min_latency_us": 5679.786666666667, 00:25:35.099 "max_latency_us": 37573.973333333335 00:25:35.099 } 00:25:35.099 ], 00:25:35.099 "core_count": 1 00:25:35.099 } 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:35.099 nvmf_trace.0 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2859323 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2859323 ']' 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2859323 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859323 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859323' 00:25:35.099 killing process with pid 2859323 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2859323 00:25:35.099 Received shutdown signal, test time was about 10.000000 seconds 00:25:35.099 00:25:35.099 Latency(us) 00:25:35.099 [2024-12-06T13:19:23.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.099 [2024-12-06T13:19:23.739Z] =================================================================================================================== 00:25:35.099 [2024-12-06T13:19:23.739Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2859323 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.099 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:35.099 rmmod nvme_tcp 00:25:35.099 rmmod nvme_fabrics 00:25:35.099 rmmod nvme_keyring 00:25:35.100 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.100 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:35.100 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:35.100 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2859144 ']' 00:25:35.100 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2859144 00:25:35.100 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2859144 ']' 00:25:35.100 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2859144 00:25:35.100 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859144 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:35.359 14:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859144' 00:25:35.359 killing process with pid 2859144 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2859144 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2859144 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:35.359 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:35.360 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:35.360 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:35.360 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.360 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.360 14:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.902 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:37.902 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.hsp 00:25:37.902 00:25:37.902 real 0m23.122s 00:25:37.902 user 0m24.748s 00:25:37.902 sys 0m9.641s 00:25:37.902 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:37.902 14:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:37.902 ************************************ 00:25:37.902 END TEST nvmf_fips 00:25:37.902 ************************************ 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:37.902 ************************************ 00:25:37.902 START TEST nvmf_control_msg_list 00:25:37.902 ************************************ 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:37.902 * Looking for test storage... 
00:25:37.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.902 --rc genhtml_branch_coverage=1 00:25:37.902 --rc genhtml_function_coverage=1 00:25:37.902 --rc genhtml_legend=1 00:25:37.902 --rc geninfo_all_blocks=1 00:25:37.902 --rc geninfo_unexecuted_blocks=1 00:25:37.902 00:25:37.902 ' 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.902 --rc genhtml_branch_coverage=1 00:25:37.902 --rc genhtml_function_coverage=1 00:25:37.902 --rc genhtml_legend=1 00:25:37.902 --rc geninfo_all_blocks=1 00:25:37.902 --rc geninfo_unexecuted_blocks=1 00:25:37.902 00:25:37.902 ' 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.902 --rc genhtml_branch_coverage=1 00:25:37.902 --rc genhtml_function_coverage=1 00:25:37.902 --rc genhtml_legend=1 00:25:37.902 --rc geninfo_all_blocks=1 00:25:37.902 --rc geninfo_unexecuted_blocks=1 00:25:37.902 00:25:37.902 ' 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.902 --rc genhtml_branch_coverage=1 00:25:37.902 --rc genhtml_function_coverage=1 00:25:37.902 --rc genhtml_legend=1 00:25:37.902 --rc geninfo_all_blocks=1 00:25:37.902 --rc geninfo_unexecuted_blocks=1 00:25:37.902 00:25:37.902 ' 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:37.902 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:37.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:37.903 14:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:46.065 14:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:46.065 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.065 14:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:46.065 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:46.065 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:46.065 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:46.065 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.066 14:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:46.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:25:46.066 00:25:46.066 --- 10.0.0.2 ping statistics --- 00:25:46.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.066 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:46.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:25:46.066 00:25:46.066 --- 10.0.0.1 ping statistics --- 00:25:46.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.066 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2865828 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2865828 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2865828 ']' 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.066 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:46.066 [2024-12-06 14:19:33.853336] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:25:46.066 [2024-12-06 14:19:33.853406] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.066 [2024-12-06 14:19:33.952872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.066 [2024-12-06 14:19:34.004914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.066 [2024-12-06 14:19:34.004965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.066 [2024-12-06 14:19:34.004974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.066 [2024-12-06 14:19:34.004982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.066 [2024-12-06 14:19:34.004989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
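Behind the target start-up notices above, nvmftestinit has already split the two e810 ports between the host and a private network namespace: cvl_0_0 carries the target at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays on the host as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, and the two pings confirm reachability in both directions. A condensed recap of that bring-up, with interface names and addresses taken from the log; the harness wraps these calls in helper functions, so treat this as an outline rather than the exact script.

# Physical-NIC (NET_TYPE=phy) test topology built by nvmftestinit.
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                                 # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target netns -> host

The nvmf_tgt whose notices appear here is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF), which is why the log prefixes the target's RPCs and teardown with the cvl_0_0_ns_spdk namespace.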
00:25:46.066 [2024-12-06 14:19:34.005760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.066 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:46.066 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:46.066 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:46.066 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:46.066 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 [2024-12-06 14:19:34.709406] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 Malloc0 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.328 14:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:46.328 [2024-12-06 14:19:34.763837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2866022 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2866023 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2866024 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2866022 00:25:46.328 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:46.328 [2024-12-06 14:19:34.874877] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:46.328 [2024-12-06 14:19:34.875242] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:46.328 [2024-12-06 14:19:34.875561] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:47.711 Initializing NVMe Controllers 00:25:47.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:47.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:47.711 Initialization complete. Launching workers. 
00:25:47.711 ======================================================== 00:25:47.711 Latency(us) 00:25:47.711 Device Information : IOPS MiB/s Average min max 00:25:47.711 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1471.00 5.75 679.85 282.48 995.88 00:25:47.711 ======================================================== 00:25:47.711 Total : 1471.00 5.75 679.85 282.48 995.88 00:25:47.711 00:25:47.711 Initializing NVMe Controllers 00:25:47.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:47.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:47.711 Initialization complete. Launching workers. 00:25:47.711 ======================================================== 00:25:47.711 Latency(us) 00:25:47.711 Device Information : IOPS MiB/s Average min max 00:25:47.711 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2285.00 8.93 437.49 158.74 701.91 00:25:47.711 ======================================================== 00:25:47.711 Total : 2285.00 8.93 437.49 158.74 701.91 00:25:47.711 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2866023 00:25:47.711 Initializing NVMe Controllers 00:25:47.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:47.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:47.711 Initialization complete. Launching workers. 00:25:47.711 ======================================================== 00:25:47.711 Latency(us) 00:25:47.711 Device Information : IOPS MiB/s Average min max 00:25:47.711 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1668.00 6.52 599.29 166.00 921.44 00:25:47.711 ======================================================== 00:25:47.711 Total : 1668.00 6.52 599.29 166.00 921.44 00:25:47.711 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2866024 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.711 rmmod nvme_tcp 00:25:47.711 rmmod nvme_fabrics 00:25:47.711 rmmod nvme_keyring 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 
2865828 ']' 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2865828 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2865828 ']' 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2865828 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865828 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2865828' 00:25:47.711 killing process with pid 2865828 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2865828 00:25:47.711 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2865828 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.971 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.880 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:49.880 00:25:49.880 real 0m12.392s 00:25:49.880 user 0m7.900s 00:25:49.880 sys 0m6.678s 00:25:49.880 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:49.880 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:49.880 ************************************ 00:25:49.880 END TEST nvmf_control_msg_list 00:25:49.880 ************************************ 
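The control_msg_list case that just finished configures the TCP transport with a deliberately tiny control-message pool (--control-msg-num 1) and a 768-byte in-capsule data size, exports a small (32 MB, 512-byte-block) malloc namespace, and then runs three one-deep spdk_nvme_perf readers on separate cores against it; the three per-core IOPS tables above are those clients completing while, as the test name suggests, queuing on the shared control-message list. A condensed sketch of the target-side RPCs and one of the perf invocations, with arguments taken from the log; paths are shown relative to the SPDK tree, and the harness actually issues the RPCs through its rpc_cmd wrapper inside the target namespace.

# Target configuration for the control-message-list test.
scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# One of the three concurrent initiators (the others used -c 0x4 and -c 0x8).
build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'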
00:25:49.880 14:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:49.880 14:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:49.880 14:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:49.880 14:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:50.145 ************************************ 00:25:50.145 START TEST nvmf_wait_for_buf 00:25:50.145 ************************************ 00:25:50.145 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:50.145 * Looking for test storage... 00:25:50.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:50.145 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.146 --rc genhtml_branch_coverage=1 00:25:50.146 --rc genhtml_function_coverage=1 00:25:50.146 --rc genhtml_legend=1 00:25:50.146 --rc geninfo_all_blocks=1 00:25:50.146 --rc geninfo_unexecuted_blocks=1 00:25:50.146 00:25:50.146 ' 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.146 --rc genhtml_branch_coverage=1 00:25:50.146 --rc genhtml_function_coverage=1 00:25:50.146 --rc genhtml_legend=1 00:25:50.146 --rc geninfo_all_blocks=1 00:25:50.146 --rc geninfo_unexecuted_blocks=1 00:25:50.146 00:25:50.146 ' 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.146 --rc genhtml_branch_coverage=1 00:25:50.146 --rc genhtml_function_coverage=1 00:25:50.146 --rc genhtml_legend=1 00:25:50.146 --rc geninfo_all_blocks=1 00:25:50.146 --rc geninfo_unexecuted_blocks=1 00:25:50.146 00:25:50.146 ' 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.146 --rc genhtml_branch_coverage=1 00:25:50.146 --rc genhtml_function_coverage=1 00:25:50.146 --rc genhtml_legend=1 00:25:50.146 --rc geninfo_all_blocks=1 00:25:50.146 --rc geninfo_unexecuted_blocks=1 00:25:50.146 00:25:50.146 ' 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.146 14:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.146 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.472 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.731 
14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:58.731 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:58.731 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:58.731 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.731 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:58.732 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.732 14:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.732 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:25:58.732 00:25:58.732 --- 10.0.0.2 ping statistics --- 00:25:58.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.732 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:58.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:25:58.732 00:25:58.732 --- 10.0.0.1 ping statistics --- 00:25:58.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.732 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2870423 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2870423 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2870423 ']' 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.732 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.732 [2024-12-06 14:19:46.331815] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
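Before nvmf_tgt comes up, nvmf_tcp_init wires the two E810 ports into a point-to-point NVMe/TCP test topology: cvl_0_0 becomes the target interface inside a dedicated namespace, while cvl_0_1 stays in the root namespace as the initiator. The steps below restate the commands traced above for this run; interface names, addresses and the namespace name are host-specific.

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator port; the comment tags the rule for cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

With connectivity confirmed both ways, nvmfappstart launches nvmf_tgt inside cvl_0_0_ns_spdk with --wait-for-rpc, which is where the SPDK/DPDK initialization messages that follow come from.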
00:25:58.732 [2024-12-06 14:19:46.331885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.732 [2024-12-06 14:19:46.430968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.733 [2024-12-06 14:19:46.482008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.733 [2024-12-06 14:19:46.482058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.733 [2024-12-06 14:19:46.482068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.733 [2024-12-06 14:19:46.482075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.733 [2024-12-06 14:19:46.482082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.733 [2024-12-06 14:19:46.482902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.733 14:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.733 Malloc0 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.733 [2024-12-06 14:19:47.300269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.733 [2024-12-06 14:19:47.336583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.733 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:58.994 [2024-12-06 14:19:47.438561] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:00.378 Initializing NVMe Controllers 00:26:00.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:00.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:26:00.378 Initialization complete. Launching workers. 00:26:00.378 ======================================================== 00:26:00.378 Latency(us) 00:26:00.378 Device Information : IOPS MiB/s Average min max 00:26:00.378 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32295.38 8018.25 63857.19 00:26:00.378 ======================================================== 00:26:00.378 Total : 129.00 16.12 32295.38 8018.25 63857.19 00:26:00.378 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:00.378 rmmod nvme_tcp 00:26:00.378 rmmod nvme_fabrics 00:26:00.378 rmmod nvme_keyring 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2870423 ']' 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2870423 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2870423 ']' 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2870423 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2870423 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2870423' 00:26:00.378 killing process with pid 2870423 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2870423 00:26:00.378 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2870423 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.638 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.180 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:03.180 00:26:03.180 real 0m12.654s 00:26:03.180 user 0m5.118s 00:26:03.180 sys 0m6.122s 00:26:03.180 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:03.180 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:03.180 ************************************ 00:26:03.180 END TEST nvmf_wait_for_buf 00:26:03.180 ************************************ 00:26:03.180 14:19:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:26:03.180 14:19:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:26:03.180 14:19:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:26:03.180 14:19:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:26:03.180 14:19:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:26:03.180 14:19:51 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:09.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:09.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:09.770 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:09.770 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.771 14:19:58 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:09.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:09.771 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.771 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:09.771 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.771 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:26:09.771 14:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:09.771 14:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:09.771 14:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.771 14:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:10.033 ************************************ 00:26:10.033 START TEST nvmf_perf_adq 00:26:10.033 ************************************ 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:10.033 * Looking for test storage... 00:26:10.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:10.033 14:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:10.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.033 --rc genhtml_branch_coverage=1 00:26:10.033 --rc genhtml_function_coverage=1 00:26:10.033 --rc genhtml_legend=1 00:26:10.033 --rc geninfo_all_blocks=1 00:26:10.033 --rc geninfo_unexecuted_blocks=1 00:26:10.033 00:26:10.033 ' 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:10.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.033 --rc genhtml_branch_coverage=1 00:26:10.033 --rc genhtml_function_coverage=1 00:26:10.033 --rc genhtml_legend=1 00:26:10.033 --rc geninfo_all_blocks=1 00:26:10.033 --rc geninfo_unexecuted_blocks=1 00:26:10.033 00:26:10.033 ' 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:10.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.033 --rc genhtml_branch_coverage=1 00:26:10.033 --rc genhtml_function_coverage=1 00:26:10.033 --rc genhtml_legend=1 00:26:10.033 --rc geninfo_all_blocks=1 00:26:10.033 --rc geninfo_unexecuted_blocks=1 00:26:10.033 00:26:10.033 ' 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:10.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.033 --rc genhtml_branch_coverage=1 00:26:10.033 --rc genhtml_function_coverage=1 00:26:10.033 --rc genhtml_legend=1 00:26:10.033 --rc geninfo_all_blocks=1 00:26:10.033 --rc geninfo_unexecuted_blocks=1 00:26:10.033 00:26:10.033 ' 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:26:10.033 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:10.034 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:10.034 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.034 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.034 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.034 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:10.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:10.034 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:10.034 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:10.034 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:10.294 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:10.294 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:10.294 14:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:18.436 14:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:18.436 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:18.436 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:18.436 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:18.437 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:18.437 14:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:18.437 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:18.437 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:19.009 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:20.922 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.213 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:26.214 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:26.214 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:26.214 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:26.214 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:26.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:26:26.214 00:26:26.214 --- 10.0.0.2 ping statistics --- 00:26:26.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.214 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:26.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:26:26.214 00:26:26.214 --- 10.0.0.1 ping statistics --- 00:26:26.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.214 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2880611 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2880611 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2880611 ']' 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.214 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:26.214 [2024-12-06 14:20:14.742677] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:26:26.214 [2024-12-06 14:20:14.742744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.214 [2024-12-06 14:20:14.844723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.475 [2024-12-06 14:20:14.898364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.475 [2024-12-06 14:20:14.898420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.475 [2024-12-06 14:20:14.898428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.475 [2024-12-06 14:20:14.898436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.475 [2024-12-06 14:20:14.898442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.475 [2024-12-06 14:20:14.900850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.475 [2024-12-06 14:20:14.901010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.475 [2024-12-06 14:20:14.901173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.475 [2024-12-06 14:20:14.901174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.046 
14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.046 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.307 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.308 [2024-12-06 14:20:15.759773] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.308 Malloc1 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.308 [2024-12-06 14:20:15.836326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2880970 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:26:27.308 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:29.225 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:26:29.225 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.225 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:29.485 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.485 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:26:29.485 "tick_rate": 2400000000, 00:26:29.485 "poll_groups": [ 00:26:29.485 { 00:26:29.485 "name": "nvmf_tgt_poll_group_000", 00:26:29.485 "admin_qpairs": 1, 00:26:29.485 "io_qpairs": 1, 00:26:29.485 "current_admin_qpairs": 1, 00:26:29.485 "current_io_qpairs": 1, 00:26:29.485 "pending_bdev_io": 0, 00:26:29.485 "completed_nvme_io": 17787, 00:26:29.485 "transports": [ 00:26:29.485 { 00:26:29.485 "trtype": "TCP" 00:26:29.485 } 00:26:29.485 ] 00:26:29.485 }, 00:26:29.485 { 00:26:29.485 "name": "nvmf_tgt_poll_group_001", 00:26:29.485 "admin_qpairs": 0, 00:26:29.485 "io_qpairs": 1, 00:26:29.485 "current_admin_qpairs": 0, 00:26:29.485 "current_io_qpairs": 1, 00:26:29.485 "pending_bdev_io": 0, 00:26:29.485 "completed_nvme_io": 20420, 00:26:29.485 "transports": [ 00:26:29.485 { 00:26:29.485 "trtype": "TCP" 00:26:29.485 } 00:26:29.485 ] 00:26:29.485 }, 00:26:29.485 { 00:26:29.485 "name": "nvmf_tgt_poll_group_002", 00:26:29.485 "admin_qpairs": 0, 00:26:29.485 "io_qpairs": 1, 00:26:29.485 "current_admin_qpairs": 0, 00:26:29.486 "current_io_qpairs": 1, 00:26:29.486 "pending_bdev_io": 0, 00:26:29.486 "completed_nvme_io": 19986, 00:26:29.486 "transports": [ 00:26:29.486 { 00:26:29.486 "trtype": "TCP" 00:26:29.486 } 00:26:29.486 ] 00:26:29.486 }, 00:26:29.486 { 00:26:29.486 "name": "nvmf_tgt_poll_group_003", 00:26:29.486 "admin_qpairs": 0, 00:26:29.486 "io_qpairs": 1, 00:26:29.486 "current_admin_qpairs": 0, 00:26:29.486 "current_io_qpairs": 1, 00:26:29.486 "pending_bdev_io": 0, 00:26:29.486 "completed_nvme_io": 17014, 00:26:29.486 "transports": [ 00:26:29.486 { 00:26:29.486 "trtype": "TCP" 00:26:29.486 } 00:26:29.486 ] 00:26:29.486 } 00:26:29.486 ] 00:26:29.486 }' 00:26:29.486 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:29.486 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:26:29.486 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:26:29.486 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:26:29.486 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2880970 00:26:37.617 Initializing NVMe Controllers 00:26:37.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:37.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:37.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:37.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:37.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:26:37.617 Initialization complete. Launching workers. 00:26:37.617 ======================================================== 00:26:37.617 Latency(us) 00:26:37.617 Device Information : IOPS MiB/s Average min max 00:26:37.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12242.60 47.82 5241.86 1276.24 43813.68 00:26:37.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13286.20 51.90 4816.61 1105.30 12902.43 00:26:37.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13301.80 51.96 4811.13 1218.94 13686.97 00:26:37.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13200.40 51.56 4848.39 1651.56 13731.05 00:26:37.618 ======================================================== 00:26:37.618 Total : 52031.00 203.25 4923.33 1105.30 43813.68 00:26:37.618 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:37.618 rmmod nvme_tcp 00:26:37.618 rmmod nvme_fabrics 00:26:37.618 rmmod nvme_keyring 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2880611 ']' 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2880611 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2880611 ']' 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2880611 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2880611 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2880611' 00:26:37.618 killing process with pid 2880611 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2880611 00:26:37.618 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2880611 00:26:37.877 14:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:37.877 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:37.877 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:37.877 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:37.877 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:26:37.878 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:37.878 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:26:37.878 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:37.878 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:37.878 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.878 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.878 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.789 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:39.789 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:26:39.789 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:39.789 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:41.740 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:43.652 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.937 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:48.938 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:48.938 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:48.938 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:48.938 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.938 14:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.938 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:26:48.938 00:26:48.938 --- 10.0.0.2 ping statistics --- 00:26:48.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.938 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:48.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:26:48.938 00:26:48.938 --- 10.0.0.1 ping statistics --- 00:26:48.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.938 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.938 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:48.939 net.core.busy_poll = 1 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:48.939 net.core.busy_read = 1 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2885438 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2885438 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2885438 ']' 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.939 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.939 [2024-12-06 14:20:37.464331] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:26:48.939 [2024-12-06 14:20:37.464407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.939 [2024-12-06 14:20:37.564057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:49.199 [2024-12-06 14:20:37.616582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
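For reference, the ADQ host setup that perf_adq.sh@22-38 just ran inside the cvl_0_0_ns_spdk namespace condenses to the sequence below. This is a sketch assembled from the trace above; the interface name cvl_0_0, the 10.0.0.2:4420 match, and the 2+2 queue split are the values used on this rig and are not universal.

    # Run inside the target namespace (ip netns exec cvl_0_0_ns_spdk ...).
    IFACE=cvl_0_0
    ethtool --offload "$IFACE" hw-tc-offload on                        # let the ice NIC offload TC filters
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                                     # enable socket busy polling
    sysctl -w net.core.busy_read=1
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1  # steer NVMe/TCP at port 4420 into TC1

The mqprio layout reserves traffic class 1 (queues 2-3) for the NVMe/TCP listener, the flower filter pins that match into hardware, and the busy_poll/busy_read sysctls keep the receiving sockets polled on those queues.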
00:26:49.199 [2024-12-06 14:20:37.616634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.199 [2024-12-06 14:20:37.616642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.199 [2024-12-06 14:20:37.616650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.199 [2024-12-06 14:20:37.616656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:49.199 [2024-12-06 14:20:37.618657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.199 [2024-12-06 14:20:37.618885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.199 [2024-12-06 14:20:37.619045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:49.199 [2024-12-06 14:20:37.619047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.769 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.770 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.030 14:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.030 [2024-12-06 14:20:38.485212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.030 Malloc1 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.030 [2024-12-06 14:20:38.560312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2885700 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:26:50.030 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:51.938 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:26:51.938 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.938 14:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.197 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.197 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:26:52.197 "tick_rate": 2400000000, 00:26:52.197 "poll_groups": [ 00:26:52.197 { 00:26:52.197 "name": "nvmf_tgt_poll_group_000", 00:26:52.197 "admin_qpairs": 1, 00:26:52.197 "io_qpairs": 1, 00:26:52.197 "current_admin_qpairs": 1, 00:26:52.197 "current_io_qpairs": 1, 00:26:52.197 "pending_bdev_io": 0, 00:26:52.197 "completed_nvme_io": 27473, 00:26:52.197 "transports": [ 00:26:52.197 { 00:26:52.197 "trtype": "TCP" 00:26:52.197 } 00:26:52.197 ] 00:26:52.197 }, 00:26:52.197 { 00:26:52.197 "name": "nvmf_tgt_poll_group_001", 00:26:52.197 "admin_qpairs": 0, 00:26:52.197 "io_qpairs": 3, 00:26:52.197 "current_admin_qpairs": 0, 00:26:52.197 "current_io_qpairs": 3, 00:26:52.197 "pending_bdev_io": 0, 00:26:52.197 "completed_nvme_io": 28005, 00:26:52.197 "transports": [ 00:26:52.197 { 00:26:52.197 "trtype": "TCP" 00:26:52.197 } 00:26:52.197 ] 00:26:52.197 }, 00:26:52.197 { 00:26:52.197 "name": "nvmf_tgt_poll_group_002", 00:26:52.197 "admin_qpairs": 0, 00:26:52.197 "io_qpairs": 0, 00:26:52.197 "current_admin_qpairs": 0, 00:26:52.197 "current_io_qpairs": 0, 00:26:52.197 "pending_bdev_io": 0, 00:26:52.197 "completed_nvme_io": 0, 00:26:52.197 "transports": [ 00:26:52.197 { 00:26:52.197 "trtype": "TCP" 00:26:52.197 } 00:26:52.197 ] 00:26:52.197 }, 00:26:52.197 { 00:26:52.197 "name": "nvmf_tgt_poll_group_003", 00:26:52.197 "admin_qpairs": 0, 00:26:52.197 "io_qpairs": 0, 00:26:52.198 "current_admin_qpairs": 0, 00:26:52.198 "current_io_qpairs": 0, 00:26:52.198 "pending_bdev_io": 0, 00:26:52.198 "completed_nvme_io": 0, 00:26:52.198 "transports": [ 00:26:52.198 { 00:26:52.198 "trtype": "TCP" 00:26:52.198 } 00:26:52.198 ] 00:26:52.198 } 00:26:52.198 ] 00:26:52.198 }' 00:26:52.198 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:52.198 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:26:52.198 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:26:52.198 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:26:52.198 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2885700 00:27:00.353 Initializing NVMe Controllers 00:27:00.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:00.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:00.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:00.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:00.353 Initialization complete. Launching workers. 
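The nvmf_get_stats output above is the actual ADQ verification: perf_adq.sh@107-109 counts the poll groups that own no I/O qpairs and flags the run when fewer than two of the four are idle, i.e. the connections from the 0xF0 initiator cores should have been concentrated onto the poll groups serving the ADQ queues. A paraphrased sketch of that check (the rpc.py invocation and variable name are illustrative; the jq select clause is the one traced above):

    # Count poll groups with no active I/O qpairs after the perf run has connected.
    idle=$(scripts/rpc.py nvmf_get_stats \
           | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | .name' \
           | wc -l)
    if (( idle < 2 )); then
        echo "ADQ did not concentrate I/O qpairs as expected" >&2
        exit 1
    fi

In this run two groups (nvmf_tgt_poll_group_002/003) are idle, so the check passes; the latency table that follows shows the matching skew, with one initiator core completing far more IOPS at much lower latency than the others.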
00:27:00.353 ======================================================== 00:27:00.353 Latency(us) 00:27:00.353 Device Information : IOPS MiB/s Average min max 00:27:00.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5842.20 22.82 10956.37 1345.67 57521.05 00:27:00.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6554.30 25.60 9764.80 817.93 59003.44 00:27:00.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 19745.60 77.13 3240.89 1921.20 8532.47 00:27:00.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6634.90 25.92 9677.50 1345.32 62960.19 00:27:00.354 ======================================================== 00:27:00.354 Total : 38777.00 151.47 6607.35 817.93 62960.19 00:27:00.354 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:00.354 rmmod nvme_tcp 00:27:00.354 rmmod nvme_fabrics 00:27:00.354 rmmod nvme_keyring 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2885438 ']' 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2885438 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2885438 ']' 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2885438 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2885438 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2885438' 00:27:00.354 killing process with pid 2885438 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2885438 00:27:00.354 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2885438 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:00.615 
14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.615 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.920 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:03.920 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:27:03.920 00:27:03.920 real 0m53.679s 00:27:03.920 user 2m50.033s 00:27:03.921 sys 0m11.319s 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:03.921 ************************************ 00:27:03.921 END TEST nvmf_perf_adq 00:27:03.921 ************************************ 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:03.921 ************************************ 00:27:03.921 START TEST nvmf_shutdown 00:27:03.921 ************************************ 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:03.921 * Looking for test storage... 
00:27:03.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:03.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.921 --rc genhtml_branch_coverage=1 00:27:03.921 --rc genhtml_function_coverage=1 00:27:03.921 --rc genhtml_legend=1 00:27:03.921 --rc geninfo_all_blocks=1 00:27:03.921 --rc geninfo_unexecuted_blocks=1 00:27:03.921 00:27:03.921 ' 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:03.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.921 --rc genhtml_branch_coverage=1 00:27:03.921 --rc genhtml_function_coverage=1 00:27:03.921 --rc genhtml_legend=1 00:27:03.921 --rc geninfo_all_blocks=1 00:27:03.921 --rc geninfo_unexecuted_blocks=1 00:27:03.921 00:27:03.921 ' 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:03.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.921 --rc genhtml_branch_coverage=1 00:27:03.921 --rc genhtml_function_coverage=1 00:27:03.921 --rc genhtml_legend=1 00:27:03.921 --rc geninfo_all_blocks=1 00:27:03.921 --rc geninfo_unexecuted_blocks=1 00:27:03.921 00:27:03.921 ' 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:03.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.921 --rc genhtml_branch_coverage=1 00:27:03.921 --rc genhtml_function_coverage=1 00:27:03.921 --rc genhtml_legend=1 00:27:03.921 --rc geninfo_all_blocks=1 00:27:03.921 --rc geninfo_unexecuted_blocks=1 00:27:03.921 00:27:03.921 ' 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
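The long scripts/common.sh trace a few entries up (lt 1.15 2, used to choose lcov options) is just an element-wise version comparison. A paraphrased sketch of the idea, not the exact common.sh code:

    # True when version $1 sorts before version $2; fields are compared numerically, missing fields count as 0.
    lt() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x"    # the branch taken in the trace above

Because the detected lcov is a 1.x release, the test keeps the pre-2.0 option form (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) seen in the LCOV_OPTS exports above.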
00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.921 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:03.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:03.922 14:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:03.922 ************************************ 00:27:03.922 START TEST nvmf_shutdown_tc1 00:27:03.922 ************************************ 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:03.922 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:12.062 14:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:12.062 14:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:12.062 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:12.062 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:12.062 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:12.062 14:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.062 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:12.063 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:12.063 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:12.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:27:12.063 00:27:12.063 --- 10.0.0.2 ping statistics --- 00:27:12.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.063 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:27:12.063 00:27:12.063 --- 10.0.0.1 ping statistics --- 00:27:12.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.063 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2892278 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2892278 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2892278 ']' 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
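The same two-port topology used in the perf_adq run is rebuilt here by nvmf_tcp_init: the target port cvl_0_0 lives in the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, and the two E810 ports are presumably cabled back to back on this rig. Condensed from the trace above (interface names and addresses are this machine's):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # the SPDK_NVMF comment lets nvmftestfini strip the rule later via iptables-save | grep -v SPDK_NVMF | iptables-restore
    ping -c 1 10.0.0.2                                 # initiator -> target, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Every nvmf_tgt started afterwards is wrapped in ip netns exec cvl_0_0_ns_spdk (NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above), which is why the pid 2892278 target listens on 10.0.0.2 from inside the namespace while the initiator tools connect from the host side.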
00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:12.063 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.063 [2024-12-06 14:21:00.196178] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:27:12.063 [2024-12-06 14:21:00.196247] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.063 [2024-12-06 14:21:00.296993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:12.063 [2024-12-06 14:21:00.350074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.063 [2024-12-06 14:21:00.350127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.063 [2024-12-06 14:21:00.350137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.063 [2024-12-06 14:21:00.350145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.063 [2024-12-06 14:21:00.350151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:12.063 [2024-12-06 14:21:00.352183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.063 [2024-12-06 14:21:00.352347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.063 [2024-12-06 14:21:00.352511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:12.063 [2024-12-06 14:21:00.352512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.636 [2024-12-06 14:21:01.075479] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:12.636 14:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.636 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.636 Malloc1 
00:27:12.636 [2024-12-06 14:21:01.203149] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.636 Malloc2 00:27:12.897 Malloc3 00:27:12.897 Malloc4 00:27:12.897 Malloc5 00:27:12.897 Malloc6 00:27:12.897 Malloc7 00:27:12.897 Malloc8 00:27:13.158 Malloc9 00:27:13.158 Malloc10 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2892629 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2892629 /var/tmp/bdevperf.sock 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2892629 ']' 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:13.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
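The create_subsystems loop traced above appends one block of RPC commands per subsystem (1 through 10) to rpcs.txt and then replays the accumulated file through a single rpc_cmd invocation (shutdown.sh@36); the Malloc1 .. Malloc10 lines and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice are the visible result. The exact block each iteration writes is not shown in this trace, so the bdev size, serial number, and flags below are assumptions; only the RPC names (all standard SPDK RPCs), the cnode/Malloc naming, and the 10.0.0.2:4420 listener come from the log:

# Hypothetical per-subsystem block appended to rpcs.txt (i = 1..10):
cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF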
00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:13.158 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.159 { 00:27:13.159 "params": { 00:27:13.159 "name": "Nvme$subsystem", 00:27:13.159 "trtype": "$TEST_TRANSPORT", 00:27:13.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.159 "adrfam": "ipv4", 00:27:13.159 "trsvcid": "$NVMF_PORT", 00:27:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.159 "hdgst": ${hdgst:-false}, 00:27:13.159 "ddgst": ${ddgst:-false} 00:27:13.159 }, 00:27:13.159 "method": "bdev_nvme_attach_controller" 00:27:13.159 } 00:27:13.159 EOF 00:27:13.159 )") 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.159 { 00:27:13.159 "params": { 00:27:13.159 "name": "Nvme$subsystem", 00:27:13.159 "trtype": "$TEST_TRANSPORT", 00:27:13.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.159 "adrfam": "ipv4", 00:27:13.159 "trsvcid": "$NVMF_PORT", 00:27:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.159 "hdgst": ${hdgst:-false}, 00:27:13.159 "ddgst": ${ddgst:-false} 00:27:13.159 }, 00:27:13.159 "method": "bdev_nvme_attach_controller" 00:27:13.159 } 00:27:13.159 EOF 00:27:13.159 )") 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.159 { 00:27:13.159 "params": { 00:27:13.159 "name": "Nvme$subsystem", 00:27:13.159 "trtype": "$TEST_TRANSPORT", 00:27:13.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.159 "adrfam": "ipv4", 00:27:13.159 "trsvcid": "$NVMF_PORT", 00:27:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.159 "hdgst": ${hdgst:-false}, 00:27:13.159 "ddgst": ${ddgst:-false} 00:27:13.159 }, 00:27:13.159 "method": "bdev_nvme_attach_controller" 
00:27:13.159 } 00:27:13.159 EOF 00:27:13.159 )") 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.159 { 00:27:13.159 "params": { 00:27:13.159 "name": "Nvme$subsystem", 00:27:13.159 "trtype": "$TEST_TRANSPORT", 00:27:13.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.159 "adrfam": "ipv4", 00:27:13.159 "trsvcid": "$NVMF_PORT", 00:27:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.159 "hdgst": ${hdgst:-false}, 00:27:13.159 "ddgst": ${ddgst:-false} 00:27:13.159 }, 00:27:13.159 "method": "bdev_nvme_attach_controller" 00:27:13.159 } 00:27:13.159 EOF 00:27:13.159 )") 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.159 { 00:27:13.159 "params": { 00:27:13.159 "name": "Nvme$subsystem", 00:27:13.159 "trtype": "$TEST_TRANSPORT", 00:27:13.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.159 "adrfam": "ipv4", 00:27:13.159 "trsvcid": "$NVMF_PORT", 00:27:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.159 "hdgst": ${hdgst:-false}, 00:27:13.159 "ddgst": ${ddgst:-false} 00:27:13.159 }, 00:27:13.159 "method": "bdev_nvme_attach_controller" 00:27:13.159 } 00:27:13.159 EOF 00:27:13.159 )") 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.159 { 00:27:13.159 "params": { 00:27:13.159 "name": "Nvme$subsystem", 00:27:13.159 "trtype": "$TEST_TRANSPORT", 00:27:13.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.159 "adrfam": "ipv4", 00:27:13.159 "trsvcid": "$NVMF_PORT", 00:27:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.159 "hdgst": ${hdgst:-false}, 00:27:13.159 "ddgst": ${ddgst:-false} 00:27:13.159 }, 00:27:13.159 "method": "bdev_nvme_attach_controller" 00:27:13.159 } 00:27:13.159 EOF 00:27:13.159 )") 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:13.159 [2024-12-06 14:21:01.724552] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
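Condensed from the shutdown.sh line numbers visible in this stretch of the trace (@78 through @89), the tc1 scenario amounts to: start a bdev_svc app that attaches all ten NVMe-oF controllers from the generated JSON, hard-kill it with SIGKILL, and then merely require that the nvmf target process is still alive. A sketch using the commands as they appear in the trace (variable names follow the script's own):

# nvmf_shutdown_tc1, condensed from the trace:
"$rootdir"/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
kill -9 "$perfpid"          # SIGKILL the initiator-side app mid-flight
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"          # target (pid 2892278 in this run) must still be up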
00:27:13.159 [2024-12-06 14:21:01.724631] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.159 { 00:27:13.159 "params": { 00:27:13.159 "name": "Nvme$subsystem", 00:27:13.159 "trtype": "$TEST_TRANSPORT", 00:27:13.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.159 "adrfam": "ipv4", 00:27:13.159 "trsvcid": "$NVMF_PORT", 00:27:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.159 "hdgst": ${hdgst:-false}, 00:27:13.159 "ddgst": ${ddgst:-false} 00:27:13.159 }, 00:27:13.159 "method": "bdev_nvme_attach_controller" 00:27:13.159 } 00:27:13.159 EOF 00:27:13.159 )") 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.159 { 00:27:13.159 "params": { 00:27:13.159 "name": "Nvme$subsystem", 00:27:13.159 "trtype": "$TEST_TRANSPORT", 00:27:13.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.159 "adrfam": "ipv4", 00:27:13.159 "trsvcid": "$NVMF_PORT", 00:27:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.159 "hdgst": ${hdgst:-false}, 00:27:13.159 "ddgst": ${ddgst:-false} 00:27:13.159 }, 00:27:13.159 "method": "bdev_nvme_attach_controller" 00:27:13.159 } 00:27:13.159 EOF 00:27:13.159 )") 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.159 { 00:27:13.159 "params": { 00:27:13.159 "name": "Nvme$subsystem", 00:27:13.159 "trtype": "$TEST_TRANSPORT", 00:27:13.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.159 "adrfam": "ipv4", 00:27:13.159 "trsvcid": "$NVMF_PORT", 00:27:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.159 "hdgst": ${hdgst:-false}, 00:27:13.159 "ddgst": ${ddgst:-false} 00:27:13.159 }, 00:27:13.159 "method": "bdev_nvme_attach_controller" 00:27:13.159 } 00:27:13.159 EOF 00:27:13.159 )") 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:13.159 { 00:27:13.159 "params": { 00:27:13.159 "name": "Nvme$subsystem", 00:27:13.159 "trtype": "$TEST_TRANSPORT", 00:27:13.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.159 "adrfam": "ipv4", 
00:27:13.159 "trsvcid": "$NVMF_PORT", 00:27:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.159 "hdgst": ${hdgst:-false}, 00:27:13.159 "ddgst": ${ddgst:-false} 00:27:13.159 }, 00:27:13.159 "method": "bdev_nvme_attach_controller" 00:27:13.159 } 00:27:13.159 EOF 00:27:13.159 )") 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:13.159 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:27:13.160 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:13.160 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:13.160 "params": { 00:27:13.160 "name": "Nvme1", 00:27:13.160 "trtype": "tcp", 00:27:13.160 "traddr": "10.0.0.2", 00:27:13.160 "adrfam": "ipv4", 00:27:13.160 "trsvcid": "4420", 00:27:13.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:13.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:13.160 "hdgst": false, 00:27:13.160 "ddgst": false 00:27:13.160 }, 00:27:13.160 "method": "bdev_nvme_attach_controller" 00:27:13.160 },{ 00:27:13.160 "params": { 00:27:13.160 "name": "Nvme2", 00:27:13.160 "trtype": "tcp", 00:27:13.160 "traddr": "10.0.0.2", 00:27:13.160 "adrfam": "ipv4", 00:27:13.160 "trsvcid": "4420", 00:27:13.160 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:13.160 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:13.160 "hdgst": false, 00:27:13.160 "ddgst": false 00:27:13.160 }, 00:27:13.160 "method": "bdev_nvme_attach_controller" 00:27:13.160 },{ 00:27:13.160 "params": { 00:27:13.160 "name": "Nvme3", 00:27:13.160 "trtype": "tcp", 00:27:13.160 "traddr": "10.0.0.2", 00:27:13.160 "adrfam": "ipv4", 00:27:13.160 "trsvcid": "4420", 00:27:13.160 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:13.160 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:13.160 "hdgst": false, 00:27:13.160 "ddgst": false 00:27:13.160 }, 00:27:13.160 "method": "bdev_nvme_attach_controller" 00:27:13.160 },{ 00:27:13.160 "params": { 00:27:13.160 "name": "Nvme4", 00:27:13.160 "trtype": "tcp", 00:27:13.160 "traddr": "10.0.0.2", 00:27:13.160 "adrfam": "ipv4", 00:27:13.160 "trsvcid": "4420", 00:27:13.160 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:13.160 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:13.160 "hdgst": false, 00:27:13.160 "ddgst": false 00:27:13.160 }, 00:27:13.160 "method": "bdev_nvme_attach_controller" 00:27:13.160 },{ 00:27:13.160 "params": { 00:27:13.160 "name": "Nvme5", 00:27:13.160 "trtype": "tcp", 00:27:13.160 "traddr": "10.0.0.2", 00:27:13.160 "adrfam": "ipv4", 00:27:13.160 "trsvcid": "4420", 00:27:13.160 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:13.160 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:13.160 "hdgst": false, 00:27:13.160 "ddgst": false 00:27:13.160 }, 00:27:13.160 "method": "bdev_nvme_attach_controller" 00:27:13.160 },{ 00:27:13.160 "params": { 00:27:13.160 "name": "Nvme6", 00:27:13.160 "trtype": "tcp", 00:27:13.160 "traddr": "10.0.0.2", 00:27:13.160 "adrfam": "ipv4", 00:27:13.160 "trsvcid": "4420", 00:27:13.160 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:13.160 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:13.160 "hdgst": false, 00:27:13.160 "ddgst": false 00:27:13.160 }, 00:27:13.160 "method": "bdev_nvme_attach_controller" 00:27:13.160 },{ 00:27:13.160 "params": { 00:27:13.160 "name": "Nvme7", 00:27:13.160 "trtype": "tcp", 00:27:13.160 "traddr": "10.0.0.2", 00:27:13.160 
"adrfam": "ipv4", 00:27:13.160 "trsvcid": "4420", 00:27:13.160 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:13.160 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:13.160 "hdgst": false, 00:27:13.160 "ddgst": false 00:27:13.160 }, 00:27:13.160 "method": "bdev_nvme_attach_controller" 00:27:13.160 },{ 00:27:13.160 "params": { 00:27:13.160 "name": "Nvme8", 00:27:13.160 "trtype": "tcp", 00:27:13.160 "traddr": "10.0.0.2", 00:27:13.160 "adrfam": "ipv4", 00:27:13.160 "trsvcid": "4420", 00:27:13.160 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:13.160 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:13.160 "hdgst": false, 00:27:13.160 "ddgst": false 00:27:13.160 }, 00:27:13.160 "method": "bdev_nvme_attach_controller" 00:27:13.160 },{ 00:27:13.160 "params": { 00:27:13.160 "name": "Nvme9", 00:27:13.160 "trtype": "tcp", 00:27:13.160 "traddr": "10.0.0.2", 00:27:13.160 "adrfam": "ipv4", 00:27:13.160 "trsvcid": "4420", 00:27:13.160 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:13.160 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:13.160 "hdgst": false, 00:27:13.160 "ddgst": false 00:27:13.160 }, 00:27:13.160 "method": "bdev_nvme_attach_controller" 00:27:13.160 },{ 00:27:13.160 "params": { 00:27:13.160 "name": "Nvme10", 00:27:13.160 "trtype": "tcp", 00:27:13.160 "traddr": "10.0.0.2", 00:27:13.160 "adrfam": "ipv4", 00:27:13.160 "trsvcid": "4420", 00:27:13.160 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:13.160 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:13.160 "hdgst": false, 00:27:13.160 "ddgst": false 00:27:13.160 }, 00:27:13.160 "method": "bdev_nvme_attach_controller" 00:27:13.160 }' 00:27:13.421 [2024-12-06 14:21:01.821147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.421 [2024-12-06 14:21:01.876164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.827 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.827 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:14.827 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:14.827 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.827 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:14.827 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.827 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2892629 00:27:14.827 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:27:14.827 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:27:15.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2892629 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:15.770 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2892278 00:27:15.770 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:15.770 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:15.770 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:15.770 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:15.770 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.770 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.770 { 00:27:15.770 "params": { 00:27:15.770 "name": "Nvme$subsystem", 00:27:15.770 "trtype": "$TEST_TRANSPORT", 00:27:15.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.770 "adrfam": "ipv4", 00:27:15.770 "trsvcid": "$NVMF_PORT", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.771 "hdgst": ${hdgst:-false}, 00:27:15.771 "ddgst": ${ddgst:-false} 00:27:15.771 }, 00:27:15.771 "method": "bdev_nvme_attach_controller" 00:27:15.771 } 00:27:15.771 EOF 00:27:15.771 )") 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.771 { 00:27:15.771 "params": { 00:27:15.771 "name": "Nvme$subsystem", 00:27:15.771 "trtype": "$TEST_TRANSPORT", 00:27:15.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.771 "adrfam": "ipv4", 00:27:15.771 "trsvcid": "$NVMF_PORT", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.771 "hdgst": ${hdgst:-false}, 00:27:15.771 "ddgst": ${ddgst:-false} 00:27:15.771 }, 00:27:15.771 "method": "bdev_nvme_attach_controller" 00:27:15.771 } 00:27:15.771 EOF 00:27:15.771 )") 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.771 { 00:27:15.771 "params": { 00:27:15.771 "name": "Nvme$subsystem", 00:27:15.771 "trtype": "$TEST_TRANSPORT", 00:27:15.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.771 "adrfam": "ipv4", 00:27:15.771 "trsvcid": "$NVMF_PORT", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.771 "hdgst": ${hdgst:-false}, 00:27:15.771 "ddgst": ${ddgst:-false} 00:27:15.771 }, 00:27:15.771 "method": "bdev_nvme_attach_controller" 00:27:15.771 } 00:27:15.771 EOF 00:27:15.771 )") 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.771 { 00:27:15.771 "params": { 00:27:15.771 "name": "Nvme$subsystem", 00:27:15.771 "trtype": "$TEST_TRANSPORT", 00:27:15.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.771 "adrfam": "ipv4", 00:27:15.771 "trsvcid": "$NVMF_PORT", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.771 "hdgst": ${hdgst:-false}, 00:27:15.771 "ddgst": ${ddgst:-false} 00:27:15.771 }, 00:27:15.771 "method": "bdev_nvme_attach_controller" 00:27:15.771 } 00:27:15.771 EOF 00:27:15.771 )") 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.771 { 00:27:15.771 "params": { 00:27:15.771 "name": "Nvme$subsystem", 00:27:15.771 "trtype": "$TEST_TRANSPORT", 00:27:15.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.771 "adrfam": "ipv4", 00:27:15.771 "trsvcid": "$NVMF_PORT", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.771 "hdgst": ${hdgst:-false}, 00:27:15.771 "ddgst": ${ddgst:-false} 00:27:15.771 }, 00:27:15.771 "method": "bdev_nvme_attach_controller" 00:27:15.771 } 00:27:15.771 EOF 00:27:15.771 )") 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.771 { 00:27:15.771 "params": { 00:27:15.771 "name": "Nvme$subsystem", 00:27:15.771 "trtype": "$TEST_TRANSPORT", 00:27:15.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.771 "adrfam": "ipv4", 00:27:15.771 "trsvcid": "$NVMF_PORT", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.771 "hdgst": ${hdgst:-false}, 00:27:15.771 "ddgst": ${ddgst:-false} 00:27:15.771 }, 00:27:15.771 "method": "bdev_nvme_attach_controller" 00:27:15.771 } 00:27:15.771 EOF 00:27:15.771 )") 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.771 { 00:27:15.771 "params": { 00:27:15.771 "name": "Nvme$subsystem", 00:27:15.771 "trtype": "$TEST_TRANSPORT", 00:27:15.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.771 "adrfam": "ipv4", 00:27:15.771 "trsvcid": "$NVMF_PORT", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.771 "hdgst": ${hdgst:-false}, 00:27:15.771 "ddgst": ${ddgst:-false} 00:27:15.771 }, 00:27:15.771 "method": "bdev_nvme_attach_controller" 00:27:15.771 } 00:27:15.771 EOF 00:27:15.771 )") 00:27:15.771 [2024-12-06 14:21:04.256985] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
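After the hard kill, tc1 checks that the target still serves I/O by running the bdevperf example app against the same ten controllers (shutdown.sh@92, traced above with --json /dev/fd/62). The flags set the I/O pattern; their meanings are restated below for readability, and the process-substitution form mirrors what the script does:

# bdevperf workload used by tc1's I/O phase, annotated:
#   -q 64      64 outstanding I/Os per bdev
#   -o 65536   64 KiB I/O size
#   -w verify  write, then read back and compare
#   -t 1       run for 1 second
./build/examples/bdevperf --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1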
00:27:15.771 [2024-12-06 14:21:04.257041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893120 ] 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.771 { 00:27:15.771 "params": { 00:27:15.771 "name": "Nvme$subsystem", 00:27:15.771 "trtype": "$TEST_TRANSPORT", 00:27:15.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.771 "adrfam": "ipv4", 00:27:15.771 "trsvcid": "$NVMF_PORT", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.771 "hdgst": ${hdgst:-false}, 00:27:15.771 "ddgst": ${ddgst:-false} 00:27:15.771 }, 00:27:15.771 "method": "bdev_nvme_attach_controller" 00:27:15.771 } 00:27:15.771 EOF 00:27:15.771 )") 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.771 { 00:27:15.771 "params": { 00:27:15.771 "name": "Nvme$subsystem", 00:27:15.771 "trtype": "$TEST_TRANSPORT", 00:27:15.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.771 "adrfam": "ipv4", 00:27:15.771 "trsvcid": "$NVMF_PORT", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.771 "hdgst": ${hdgst:-false}, 00:27:15.771 "ddgst": ${ddgst:-false} 00:27:15.771 }, 00:27:15.771 "method": "bdev_nvme_attach_controller" 00:27:15.771 } 00:27:15.771 EOF 00:27:15.771 )") 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.771 { 00:27:15.771 "params": { 00:27:15.771 "name": "Nvme$subsystem", 00:27:15.771 "trtype": "$TEST_TRANSPORT", 00:27:15.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.771 "adrfam": "ipv4", 00:27:15.771 "trsvcid": "$NVMF_PORT", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.771 "hdgst": ${hdgst:-false}, 00:27:15.771 "ddgst": ${ddgst:-false} 00:27:15.771 }, 00:27:15.771 "method": "bdev_nvme_attach_controller" 00:27:15.771 } 00:27:15.771 EOF 00:27:15.771 )") 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
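The config fragments assembled in the loop above are joined, run through jq, and written to the file descriptor that bdevperf reads via --json (/dev/fd/62 in this run). The per-controller params blocks appear verbatim in the printf that follows; the outer subsystems/bdev/config wrapper in the sketch below is an assumption based on SPDK's usual JSON config layout and is not itself shown in this log:

# Hypothetical single-controller example of the generated bdev config:
cat > /tmp/nvmf_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON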
00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:15.771 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:15.771 "params": { 00:27:15.771 "name": "Nvme1", 00:27:15.771 "trtype": "tcp", 00:27:15.771 "traddr": "10.0.0.2", 00:27:15.771 "adrfam": "ipv4", 00:27:15.771 "trsvcid": "4420", 00:27:15.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:15.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:15.771 "hdgst": false, 00:27:15.771 "ddgst": false 00:27:15.771 }, 00:27:15.772 "method": "bdev_nvme_attach_controller" 00:27:15.772 },{ 00:27:15.772 "params": { 00:27:15.772 "name": "Nvme2", 00:27:15.772 "trtype": "tcp", 00:27:15.772 "traddr": "10.0.0.2", 00:27:15.772 "adrfam": "ipv4", 00:27:15.772 "trsvcid": "4420", 00:27:15.772 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:15.772 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:15.772 "hdgst": false, 00:27:15.772 "ddgst": false 00:27:15.772 }, 00:27:15.772 "method": "bdev_nvme_attach_controller" 00:27:15.772 },{ 00:27:15.772 "params": { 00:27:15.772 "name": "Nvme3", 00:27:15.772 "trtype": "tcp", 00:27:15.772 "traddr": "10.0.0.2", 00:27:15.772 "adrfam": "ipv4", 00:27:15.772 "trsvcid": "4420", 00:27:15.772 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:15.772 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:15.772 "hdgst": false, 00:27:15.772 "ddgst": false 00:27:15.772 }, 00:27:15.772 "method": "bdev_nvme_attach_controller" 00:27:15.772 },{ 00:27:15.772 "params": { 00:27:15.772 "name": "Nvme4", 00:27:15.772 "trtype": "tcp", 00:27:15.772 "traddr": "10.0.0.2", 00:27:15.772 "adrfam": "ipv4", 00:27:15.772 "trsvcid": "4420", 00:27:15.772 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:15.772 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:15.772 "hdgst": false, 00:27:15.772 "ddgst": false 00:27:15.772 }, 00:27:15.772 "method": "bdev_nvme_attach_controller" 00:27:15.772 },{ 00:27:15.772 "params": { 00:27:15.772 "name": "Nvme5", 00:27:15.772 "trtype": "tcp", 00:27:15.772 "traddr": "10.0.0.2", 00:27:15.772 "adrfam": "ipv4", 00:27:15.772 "trsvcid": "4420", 00:27:15.772 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:15.772 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:15.772 "hdgst": false, 00:27:15.772 "ddgst": false 00:27:15.772 }, 00:27:15.772 "method": "bdev_nvme_attach_controller" 00:27:15.772 },{ 00:27:15.772 "params": { 00:27:15.772 "name": "Nvme6", 00:27:15.772 "trtype": "tcp", 00:27:15.772 "traddr": "10.0.0.2", 00:27:15.772 "adrfam": "ipv4", 00:27:15.772 "trsvcid": "4420", 00:27:15.772 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:15.772 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:15.772 "hdgst": false, 00:27:15.772 "ddgst": false 00:27:15.772 }, 00:27:15.772 "method": "bdev_nvme_attach_controller" 00:27:15.772 },{ 00:27:15.772 "params": { 00:27:15.772 "name": "Nvme7", 00:27:15.772 "trtype": "tcp", 00:27:15.772 "traddr": "10.0.0.2", 00:27:15.772 "adrfam": "ipv4", 00:27:15.772 "trsvcid": "4420", 00:27:15.772 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:15.772 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:15.772 "hdgst": false, 00:27:15.772 "ddgst": false 00:27:15.772 }, 00:27:15.772 "method": "bdev_nvme_attach_controller" 00:27:15.772 },{ 00:27:15.772 "params": { 00:27:15.772 "name": "Nvme8", 00:27:15.772 "trtype": "tcp", 00:27:15.772 "traddr": "10.0.0.2", 00:27:15.772 "adrfam": "ipv4", 00:27:15.772 "trsvcid": "4420", 00:27:15.772 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:15.772 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:15.772 "hdgst": false, 00:27:15.772 "ddgst": false 00:27:15.772 }, 00:27:15.772 "method": "bdev_nvme_attach_controller" 00:27:15.772 },{ 00:27:15.772 "params": { 00:27:15.772 "name": "Nvme9", 00:27:15.772 "trtype": "tcp", 00:27:15.772 "traddr": "10.0.0.2", 00:27:15.772 "adrfam": "ipv4", 00:27:15.772 "trsvcid": "4420", 00:27:15.772 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:15.772 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:15.772 "hdgst": false, 00:27:15.772 "ddgst": false 00:27:15.772 }, 00:27:15.772 "method": "bdev_nvme_attach_controller" 00:27:15.772 },{ 00:27:15.772 "params": { 00:27:15.772 "name": "Nvme10", 00:27:15.772 "trtype": "tcp", 00:27:15.772 "traddr": "10.0.0.2", 00:27:15.772 "adrfam": "ipv4", 00:27:15.772 "trsvcid": "4420", 00:27:15.772 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:15.772 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:15.772 "hdgst": false, 00:27:15.772 "ddgst": false 00:27:15.772 }, 00:27:15.772 "method": "bdev_nvme_attach_controller" 00:27:15.772 }' 00:27:15.772 [2024-12-06 14:21:04.348429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.772 [2024-12-06 14:21:04.384476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.161 Running I/O for 1 seconds... 00:27:18.363 1858.00 IOPS, 116.12 MiB/s 00:27:18.363 Latency(us) 00:27:18.363 [2024-12-06T13:21:07.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.363 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.363 Verification LBA range: start 0x0 length 0x400 00:27:18.363 Nvme1n1 : 1.11 229.81 14.36 0.00 0.00 275440.00 20097.71 246415.36 00:27:18.363 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.363 Verification LBA range: start 0x0 length 0x400 00:27:18.363 Nvme2n1 : 1.12 231.34 14.46 0.00 0.00 267493.38 7099.73 244667.73 00:27:18.363 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.363 Verification LBA range: start 0x0 length 0x400 00:27:18.363 Nvme3n1 : 1.11 231.31 14.46 0.00 0.00 264194.77 19442.35 235929.60 00:27:18.363 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.363 Verification LBA range: start 0x0 length 0x400 00:27:18.363 Nvme4n1 : 1.10 231.92 14.49 0.00 0.00 258482.13 22391.47 241172.48 00:27:18.363 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.363 Verification LBA range: start 0x0 length 0x400 00:27:18.363 Nvme5n1 : 1.12 228.20 14.26 0.00 0.00 258145.28 28835.84 249910.61 00:27:18.363 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.363 Verification LBA range: start 0x0 length 0x400 00:27:18.363 Nvme6n1 : 1.18 270.30 16.89 0.00 0.00 214685.53 15619.41 258648.75 00:27:18.363 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.363 Verification LBA range: start 0x0 length 0x400 00:27:18.363 Nvme7n1 : 1.13 227.32 14.21 0.00 0.00 249625.39 14636.37 248162.99 00:27:18.363 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.363 Verification LBA range: start 0x0 length 0x400 00:27:18.363 Nvme8n1 : 1.19 322.47 20.15 0.00 0.00 173878.93 5980.16 262144.00 00:27:18.363 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:18.363 Verification LBA range: start 0x0 length 0x400 00:27:18.363 Nvme9n1 : 1.20 267.27 16.70 0.00 0.00 206013.87 12014.93 267386.88 00:27:18.363 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:27:18.363 Verification LBA range: start 0x0 length 0x400 00:27:18.363 Nvme10n1 : 1.18 217.10 13.57 0.00 0.00 248398.72 21189.97 246415.36 00:27:18.363 [2024-12-06T13:21:07.003Z] =================================================================================================================== 00:27:18.363 [2024-12-06T13:21:07.003Z] Total : 2457.02 153.56 0.00 0.00 237176.91 5980.16 267386.88 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:18.624 rmmod nvme_tcp 00:27:18.624 rmmod nvme_fabrics 00:27:18.624 rmmod nvme_keyring 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2892278 ']' 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2892278 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2892278 ']' 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2892278 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2892278 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:18.624 14:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2892278' 00:27:18.624 killing process with pid 2892278 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2892278 00:27:18.624 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2892278 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.885 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:20.963 00:27:20.963 real 0m17.008s 00:27:20.963 user 0m34.393s 00:27:20.963 sys 0m6.944s 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:20.963 ************************************ 00:27:20.963 END TEST nvmf_shutdown_tc1 00:27:20.963 ************************************ 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:20.963 ************************************ 00:27:20.963 START TEST nvmf_shutdown_tc2 00:27:20.963 ************************************ 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:27:20.963 14:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.963 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.224 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.224 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:21.224 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:21.224 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:21.224 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:21.224 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:21.224 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:21.225 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.225 14:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:21.225 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:21.225 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.225 14:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:21.225 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:21.225 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:21.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:27:21.486 00:27:21.486 --- 10.0.0.2 ping statistics --- 00:27:21.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.486 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:27:21.486 00:27:21.486 --- 10.0.0.1 ping statistics --- 00:27:21.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.486 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2894539 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2894539 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2894539 ']' 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.486 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.487 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.487 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.487 [2024-12-06 14:21:10.007192] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:27:21.487 [2024-12-06 14:21:10.007260] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.487 [2024-12-06 14:21:10.108249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.748 [2024-12-06 14:21:10.150368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.748 [2024-12-06 14:21:10.150403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.749 [2024-12-06 14:21:10.150409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.749 [2024-12-06 14:21:10.150414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.749 [2024-12-06 14:21:10.150419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
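For anyone reading the trace above, the nvmf_tcp_init sequence reduces to a handful of manual steps. The sketch below is illustrative only, not a substitute for nvmf/common.sh: it uses the cvl_0_0/cvl_0_1 interface names reported on this host (other machines will show different names) and drops the comment tag that the ipts wrapper adds to the iptables rule.

# target-side port goes into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator keeps 10.0.0.1 in the default namespace, target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# bring both ports (and loopback inside the namespace) up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP listener port and confirm reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# the target itself then runs inside the namespace, as traced above (path shortened here)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E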
00:27:21.749 [2024-12-06 14:21:10.152211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:21.749 [2024-12-06 14:21:10.152366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.749 [2024-12-06 14:21:10.152527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:21.749 [2024-12-06 14:21:10.152717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.320 [2024-12-06 14:21:10.854348] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.320 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.320 Malloc1 00:27:22.580 [2024-12-06 14:21:10.964213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.580 Malloc2 00:27:22.580 Malloc3 00:27:22.580 Malloc4 00:27:22.580 Malloc5 00:27:22.580 Malloc6 00:27:22.580 Malloc7 00:27:22.842 Malloc8 00:27:22.842 Malloc9 00:27:22.842 Malloc10 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2895074 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2895074 /var/tmp/bdevperf.sock 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2895074 ']' 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:22.842 14:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:22.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:22.842 { 00:27:22.842 "params": { 00:27:22.842 "name": "Nvme$subsystem", 00:27:22.842 "trtype": "$TEST_TRANSPORT", 00:27:22.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.842 "adrfam": "ipv4", 00:27:22.842 "trsvcid": "$NVMF_PORT", 00:27:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.842 "hdgst": ${hdgst:-false}, 00:27:22.842 "ddgst": ${ddgst:-false} 00:27:22.842 }, 00:27:22.842 "method": "bdev_nvme_attach_controller" 00:27:22.842 } 00:27:22.842 EOF 00:27:22.842 )") 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:22.842 { 00:27:22.842 "params": { 00:27:22.842 "name": "Nvme$subsystem", 00:27:22.842 "trtype": "$TEST_TRANSPORT", 00:27:22.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.842 "adrfam": "ipv4", 00:27:22.842 "trsvcid": "$NVMF_PORT", 00:27:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.842 "hdgst": ${hdgst:-false}, 00:27:22.842 "ddgst": ${ddgst:-false} 00:27:22.842 }, 00:27:22.842 "method": "bdev_nvme_attach_controller" 00:27:22.842 } 00:27:22.842 EOF 00:27:22.842 )") 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:22.842 { 00:27:22.842 "params": { 00:27:22.842 
"name": "Nvme$subsystem", 00:27:22.842 "trtype": "$TEST_TRANSPORT", 00:27:22.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.842 "adrfam": "ipv4", 00:27:22.842 "trsvcid": "$NVMF_PORT", 00:27:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.842 "hdgst": ${hdgst:-false}, 00:27:22.842 "ddgst": ${ddgst:-false} 00:27:22.842 }, 00:27:22.842 "method": "bdev_nvme_attach_controller" 00:27:22.842 } 00:27:22.842 EOF 00:27:22.842 )") 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:22.842 { 00:27:22.842 "params": { 00:27:22.842 "name": "Nvme$subsystem", 00:27:22.842 "trtype": "$TEST_TRANSPORT", 00:27:22.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.842 "adrfam": "ipv4", 00:27:22.842 "trsvcid": "$NVMF_PORT", 00:27:22.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.842 "hdgst": ${hdgst:-false}, 00:27:22.842 "ddgst": ${ddgst:-false} 00:27:22.842 }, 00:27:22.842 "method": "bdev_nvme_attach_controller" 00:27:22.842 } 00:27:22.842 EOF 00:27:22.842 )") 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:22.842 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:22.843 { 00:27:22.843 "params": { 00:27:22.843 "name": "Nvme$subsystem", 00:27:22.843 "trtype": "$TEST_TRANSPORT", 00:27:22.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.843 "adrfam": "ipv4", 00:27:22.843 "trsvcid": "$NVMF_PORT", 00:27:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.843 "hdgst": ${hdgst:-false}, 00:27:22.843 "ddgst": ${ddgst:-false} 00:27:22.843 }, 00:27:22.843 "method": "bdev_nvme_attach_controller" 00:27:22.843 } 00:27:22.843 EOF 00:27:22.843 )") 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:22.843 { 00:27:22.843 "params": { 00:27:22.843 "name": "Nvme$subsystem", 00:27:22.843 "trtype": "$TEST_TRANSPORT", 00:27:22.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.843 "adrfam": "ipv4", 00:27:22.843 "trsvcid": "$NVMF_PORT", 00:27:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.843 "hdgst": ${hdgst:-false}, 00:27:22.843 "ddgst": ${ddgst:-false} 00:27:22.843 }, 00:27:22.843 "method": "bdev_nvme_attach_controller" 00:27:22.843 } 00:27:22.843 EOF 00:27:22.843 )") 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:22.843 [2024-12-06 14:21:11.408005] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:27:22.843 [2024-12-06 14:21:11.408060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895074 ] 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:22.843 { 00:27:22.843 "params": { 00:27:22.843 "name": "Nvme$subsystem", 00:27:22.843 "trtype": "$TEST_TRANSPORT", 00:27:22.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.843 "adrfam": "ipv4", 00:27:22.843 "trsvcid": "$NVMF_PORT", 00:27:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.843 "hdgst": ${hdgst:-false}, 00:27:22.843 "ddgst": ${ddgst:-false} 00:27:22.843 }, 00:27:22.843 "method": "bdev_nvme_attach_controller" 00:27:22.843 } 00:27:22.843 EOF 00:27:22.843 )") 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:22.843 { 00:27:22.843 "params": { 00:27:22.843 "name": "Nvme$subsystem", 00:27:22.843 "trtype": "$TEST_TRANSPORT", 00:27:22.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.843 "adrfam": "ipv4", 00:27:22.843 "trsvcid": "$NVMF_PORT", 00:27:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.843 "hdgst": ${hdgst:-false}, 00:27:22.843 "ddgst": ${ddgst:-false} 00:27:22.843 }, 00:27:22.843 "method": "bdev_nvme_attach_controller" 00:27:22.843 } 00:27:22.843 EOF 00:27:22.843 )") 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:22.843 { 00:27:22.843 "params": { 00:27:22.843 "name": "Nvme$subsystem", 00:27:22.843 "trtype": "$TEST_TRANSPORT", 00:27:22.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.843 "adrfam": "ipv4", 00:27:22.843 "trsvcid": "$NVMF_PORT", 00:27:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.843 "hdgst": ${hdgst:-false}, 00:27:22.843 "ddgst": ${ddgst:-false} 00:27:22.843 }, 00:27:22.843 "method": "bdev_nvme_attach_controller" 00:27:22.843 } 00:27:22.843 EOF 00:27:22.843 )") 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:22.843 { 00:27:22.843 "params": { 00:27:22.843 "name": "Nvme$subsystem", 00:27:22.843 "trtype": "$TEST_TRANSPORT", 00:27:22.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.843 
"adrfam": "ipv4", 00:27:22.843 "trsvcid": "$NVMF_PORT", 00:27:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.843 "hdgst": ${hdgst:-false}, 00:27:22.843 "ddgst": ${ddgst:-false} 00:27:22.843 }, 00:27:22.843 "method": "bdev_nvme_attach_controller" 00:27:22.843 } 00:27:22.843 EOF 00:27:22.843 )") 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:27:22.843 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:22.843 "params": { 00:27:22.843 "name": "Nvme1", 00:27:22.843 "trtype": "tcp", 00:27:22.843 "traddr": "10.0.0.2", 00:27:22.843 "adrfam": "ipv4", 00:27:22.843 "trsvcid": "4420", 00:27:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:22.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:22.843 "hdgst": false, 00:27:22.843 "ddgst": false 00:27:22.843 }, 00:27:22.843 "method": "bdev_nvme_attach_controller" 00:27:22.843 },{ 00:27:22.843 "params": { 00:27:22.843 "name": "Nvme2", 00:27:22.843 "trtype": "tcp", 00:27:22.843 "traddr": "10.0.0.2", 00:27:22.843 "adrfam": "ipv4", 00:27:22.843 "trsvcid": "4420", 00:27:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:22.843 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:22.843 "hdgst": false, 00:27:22.843 "ddgst": false 00:27:22.843 }, 00:27:22.843 "method": "bdev_nvme_attach_controller" 00:27:22.843 },{ 00:27:22.843 "params": { 00:27:22.843 "name": "Nvme3", 00:27:22.843 "trtype": "tcp", 00:27:22.843 "traddr": "10.0.0.2", 00:27:22.843 "adrfam": "ipv4", 00:27:22.843 "trsvcid": "4420", 00:27:22.843 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:22.843 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:22.843 "hdgst": false, 00:27:22.843 "ddgst": false 00:27:22.843 }, 00:27:22.843 "method": "bdev_nvme_attach_controller" 00:27:22.843 },{ 00:27:22.843 "params": { 00:27:22.843 "name": "Nvme4", 00:27:22.843 "trtype": "tcp", 00:27:22.843 "traddr": "10.0.0.2", 00:27:22.844 "adrfam": "ipv4", 00:27:22.844 "trsvcid": "4420", 00:27:22.844 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:22.844 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:22.844 "hdgst": false, 00:27:22.844 "ddgst": false 00:27:22.844 }, 00:27:22.844 "method": "bdev_nvme_attach_controller" 00:27:22.844 },{ 00:27:22.844 "params": { 00:27:22.844 "name": "Nvme5", 00:27:22.844 "trtype": "tcp", 00:27:22.844 "traddr": "10.0.0.2", 00:27:22.844 "adrfam": "ipv4", 00:27:22.844 "trsvcid": "4420", 00:27:22.844 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:22.844 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:22.844 "hdgst": false, 00:27:22.844 "ddgst": false 00:27:22.844 }, 00:27:22.844 "method": "bdev_nvme_attach_controller" 00:27:22.844 },{ 00:27:22.844 "params": { 00:27:22.844 "name": "Nvme6", 00:27:22.844 "trtype": "tcp", 00:27:22.844 "traddr": "10.0.0.2", 00:27:22.844 "adrfam": "ipv4", 00:27:22.844 "trsvcid": "4420", 00:27:22.844 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:22.844 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:22.844 "hdgst": false, 00:27:22.844 "ddgst": false 00:27:22.844 }, 00:27:22.844 "method": "bdev_nvme_attach_controller" 00:27:22.844 },{ 00:27:22.844 "params": { 00:27:22.844 "name": "Nvme7", 00:27:22.844 "trtype": "tcp", 00:27:22.844 "traddr": "10.0.0.2", 
00:27:22.844 "adrfam": "ipv4", 00:27:22.844 "trsvcid": "4420", 00:27:22.844 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:22.844 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:22.844 "hdgst": false, 00:27:22.844 "ddgst": false 00:27:22.844 }, 00:27:22.844 "method": "bdev_nvme_attach_controller" 00:27:22.844 },{ 00:27:22.844 "params": { 00:27:22.844 "name": "Nvme8", 00:27:22.844 "trtype": "tcp", 00:27:22.844 "traddr": "10.0.0.2", 00:27:22.844 "adrfam": "ipv4", 00:27:22.844 "trsvcid": "4420", 00:27:22.844 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:22.844 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:22.844 "hdgst": false, 00:27:22.844 "ddgst": false 00:27:22.844 }, 00:27:22.844 "method": "bdev_nvme_attach_controller" 00:27:22.844 },{ 00:27:22.844 "params": { 00:27:22.844 "name": "Nvme9", 00:27:22.844 "trtype": "tcp", 00:27:22.844 "traddr": "10.0.0.2", 00:27:22.844 "adrfam": "ipv4", 00:27:22.844 "trsvcid": "4420", 00:27:22.844 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:22.844 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:22.844 "hdgst": false, 00:27:22.844 "ddgst": false 00:27:22.844 }, 00:27:22.844 "method": "bdev_nvme_attach_controller" 00:27:22.844 },{ 00:27:22.844 "params": { 00:27:22.844 "name": "Nvme10", 00:27:22.844 "trtype": "tcp", 00:27:22.844 "traddr": "10.0.0.2", 00:27:22.844 "adrfam": "ipv4", 00:27:22.844 "trsvcid": "4420", 00:27:22.844 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:22.844 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:22.844 "hdgst": false, 00:27:22.844 "ddgst": false 00:27:22.844 }, 00:27:22.844 "method": "bdev_nvme_attach_controller" 00:27:22.844 }' 00:27:23.104 [2024-12-06 14:21:11.498634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.104 [2024-12-06 14:21:11.535060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.488 Running I/O for 10 seconds... 
00:27:24.488 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.488 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:24.488 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:24.488 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.488 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:24.767 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:25.027 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:25.027 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:25.027 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:25.027 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:25.027 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.027 14:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.027 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.028 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:27:25.028 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:27:25.028 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:25.288 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2895074 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2895074 ']' 00:27:25.289 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2895074 00:27:25.550 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:25.550 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:25.550 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2895074 00:27:25.550 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:25.550 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:25.550 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2895074' 00:27:25.550 killing process with pid 2895074 00:27:25.550 14:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2895074 00:27:25.550 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2895074 00:27:25.550 2187.00 IOPS, 136.69 MiB/s [2024-12-06T13:21:14.190Z] Received shutdown signal, test time was about 1.027490 seconds 00:27:25.550 00:27:25.550 Latency(us) 00:27:25.550 [2024-12-06T13:21:14.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.550 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.550 Verification LBA range: start 0x0 length 0x400 00:27:25.550 Nvme1n1 : 0.97 197.18 12.32 0.00 0.00 320833.99 21736.11 253405.87 00:27:25.550 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.550 Verification LBA range: start 0x0 length 0x400 00:27:25.551 Nvme2n1 : 0.98 196.07 12.25 0.00 0.00 315419.02 21299.20 276125.01 00:27:25.551 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.551 Verification LBA range: start 0x0 length 0x400 00:27:25.551 Nvme3n1 : 0.98 266.14 16.63 0.00 0.00 227523.32 3194.88 256901.12 00:27:25.551 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.551 Verification LBA range: start 0x0 length 0x400 00:27:25.551 Nvme4n1 : 1.00 256.44 16.03 0.00 0.00 231478.61 21517.65 248162.99 00:27:25.551 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.551 Verification LBA range: start 0x0 length 0x400 00:27:25.551 Nvme5n1 : 1.03 249.37 15.59 0.00 0.00 234193.92 17694.72 251658.24 00:27:25.551 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.551 Verification LBA range: start 0x0 length 0x400 00:27:25.551 Nvme6n1 : 1.02 250.05 15.63 0.00 0.00 228680.11 16930.13 288358.40 00:27:25.551 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.551 Verification LBA range: start 0x0 length 0x400 00:27:25.551 Nvme7n1 : 1.00 257.01 16.06 0.00 0.00 216598.83 20753.07 253405.87 00:27:25.551 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.551 Verification LBA range: start 0x0 length 0x400 00:27:25.551 Nvme8n1 : 1.02 250.72 15.67 0.00 0.00 217852.37 18240.85 212336.64 00:27:25.551 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.551 Verification LBA range: start 0x0 length 0x400 00:27:25.551 Nvme9n1 : 1.02 256.40 16.03 0.00 0.00 204175.40 15400.96 246415.36 00:27:25.551 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.551 Verification LBA range: start 0x0 length 0x400 00:27:25.551 Nvme10n1 : 0.99 194.28 12.14 0.00 0.00 267348.76 28180.48 267386.88 00:27:25.551 [2024-12-06T13:21:14.191Z] =================================================================================================================== 00:27:25.551 [2024-12-06T13:21:14.191Z] Total : 2373.66 148.35 0.00 0.00 241852.13 3194.88 288358.40 00:27:25.811 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2894539 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f 
./local-job0-0-verify.state 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:26.791 rmmod nvme_tcp 00:27:26.791 rmmod nvme_fabrics 00:27:26.791 rmmod nvme_keyring 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2894539 ']' 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2894539 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2894539 ']' 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2894539 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2894539 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2894539' 00:27:26.791 killing process with pid 2894539 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2894539 00:27:26.791 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2894539 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.051 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.595 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:29.595 00:27:29.595 real 0m8.096s 00:27:29.595 user 0m24.801s 00:27:29.595 sys 0m1.318s 00:27:29.595 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.595 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.596 ************************************ 00:27:29.596 END TEST nvmf_shutdown_tc2 00:27:29.596 ************************************ 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:29.596 ************************************ 00:27:29.596 START TEST nvmf_shutdown_tc3 00:27:29.596 ************************************ 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:29.596 14:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.596 14:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:29.596 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:29.596 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:29.596 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:29.596 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:29.597 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
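A minimal sketch (not the SPDK script itself) of the device-resolution step traced above: nvmf/common.sh takes each matching e810 PCI function and resolves it to its kernel interface by globbing /sys/bus/pci/devices/<pci>/net/, which is how 0000:4b:00.0 and 0000:4b:00.1 map to cvl_0_0 and cvl_0_1. Only the two PCI addresses are taken from the trace; the loop itself is illustrative.

# Sketch: map the e810 PCI functions found above to their kernel netdevs.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done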
00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:29.597 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:29.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:27:29.597 00:27:29.597 --- 10.0.0.2 ping statistics --- 00:27:29.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.597 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:29.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:27:29.597 00:27:29.597 --- 10.0.0.1 ping statistics --- 00:27:29.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.597 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2896544 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2896544 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:29.597 14:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2896544 ']' 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.597 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.597 [2024-12-06 14:21:18.172864] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:27:29.597 [2024-12-06 14:21:18.172912] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.858 [2024-12-06 14:21:18.262552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.858 [2024-12-06 14:21:18.295677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.858 [2024-12-06 14:21:18.295705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.858 [2024-12-06 14:21:18.295712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.858 [2024-12-06 14:21:18.295717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.858 [2024-12-06 14:21:18.295721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
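Condensed, the environment the target was just launched into looks like the sketch below: nvmf_tcp_init moved the target-side port cvl_0_0 into a private network namespace with 10.0.0.2/24, left cvl_0_1 in the root namespace as the initiator side with 10.0.0.1/24, opened TCP port 4420, verified connectivity in both directions, and nvmfappstart then started nvmf_tgt inside that namespace. Every command is taken from the trace above; address flushes and the repeated ip netns exec prefix on the launch line are dropped for brevity.

# Topology built by nvmf_tcp_init before starting the target (commands from the trace above):
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (inside namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns
# nvmfappstart then launches the target inside the namespace:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E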
00:27:29.858 [2024-12-06 14:21:18.297005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.858 [2024-12-06 14:21:18.297158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.858 [2024-12-06 14:21:18.297306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.858 [2024-12-06 14:21:18.297308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:30.430 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.430 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:27:30.430 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:30.430 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.430 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.430 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.430 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.430 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.430 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.430 [2024-12-06 14:21:19.015867] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.430 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.430 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:30.431 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:30.692 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:30.692 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:30.692 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:30.692 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:30.692 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:30.692 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.692 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.692 Malloc1 00:27:30.692 [2024-12-06 14:21:19.125327] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.692 Malloc2 00:27:30.692 Malloc3 00:27:30.692 Malloc4 00:27:30.692 Malloc5 00:27:30.692 Malloc6 00:27:30.954 Malloc7 00:27:30.954 Malloc8 00:27:30.954 Malloc9 00:27:30.954 Malloc10 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2896908 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2896908 /var/tmp/bdevperf.sock 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2896908 ']' 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:30.954 14:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:30.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.954 { 00:27:30.954 "params": { 00:27:30.954 "name": "Nvme$subsystem", 00:27:30.954 "trtype": "$TEST_TRANSPORT", 00:27:30.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.954 "adrfam": "ipv4", 00:27:30.954 "trsvcid": "$NVMF_PORT", 00:27:30.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.954 "hdgst": ${hdgst:-false}, 00:27:30.954 "ddgst": ${ddgst:-false} 00:27:30.954 }, 00:27:30.954 "method": "bdev_nvme_attach_controller" 00:27:30.954 } 00:27:30.954 EOF 00:27:30.954 )") 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.954 { 00:27:30.954 "params": { 00:27:30.954 "name": "Nvme$subsystem", 00:27:30.954 "trtype": "$TEST_TRANSPORT", 00:27:30.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.954 "adrfam": "ipv4", 00:27:30.954 "trsvcid": "$NVMF_PORT", 00:27:30.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.954 "hdgst": ${hdgst:-false}, 00:27:30.954 "ddgst": ${ddgst:-false} 00:27:30.954 }, 00:27:30.954 "method": "bdev_nvme_attach_controller" 00:27:30.954 } 00:27:30.954 EOF 00:27:30.954 )") 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.954 { 00:27:30.954 "params": { 00:27:30.954 
"name": "Nvme$subsystem", 00:27:30.954 "trtype": "$TEST_TRANSPORT", 00:27:30.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.954 "adrfam": "ipv4", 00:27:30.954 "trsvcid": "$NVMF_PORT", 00:27:30.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.954 "hdgst": ${hdgst:-false}, 00:27:30.954 "ddgst": ${ddgst:-false} 00:27:30.954 }, 00:27:30.954 "method": "bdev_nvme_attach_controller" 00:27:30.954 } 00:27:30.954 EOF 00:27:30.954 )") 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.954 { 00:27:30.954 "params": { 00:27:30.954 "name": "Nvme$subsystem", 00:27:30.954 "trtype": "$TEST_TRANSPORT", 00:27:30.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.954 "adrfam": "ipv4", 00:27:30.954 "trsvcid": "$NVMF_PORT", 00:27:30.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.954 "hdgst": ${hdgst:-false}, 00:27:30.954 "ddgst": ${ddgst:-false} 00:27:30.954 }, 00:27:30.954 "method": "bdev_nvme_attach_controller" 00:27:30.954 } 00:27:30.954 EOF 00:27:30.954 )") 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.954 { 00:27:30.954 "params": { 00:27:30.954 "name": "Nvme$subsystem", 00:27:30.954 "trtype": "$TEST_TRANSPORT", 00:27:30.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.954 "adrfam": "ipv4", 00:27:30.954 "trsvcid": "$NVMF_PORT", 00:27:30.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.954 "hdgst": ${hdgst:-false}, 00:27:30.954 "ddgst": ${ddgst:-false} 00:27:30.954 }, 00:27:30.954 "method": "bdev_nvme_attach_controller" 00:27:30.954 } 00:27:30.954 EOF 00:27:30.954 )") 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.954 { 00:27:30.954 "params": { 00:27:30.954 "name": "Nvme$subsystem", 00:27:30.954 "trtype": "$TEST_TRANSPORT", 00:27:30.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.954 "adrfam": "ipv4", 00:27:30.954 "trsvcid": "$NVMF_PORT", 00:27:30.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.954 "hdgst": ${hdgst:-false}, 00:27:30.954 "ddgst": ${ddgst:-false} 00:27:30.954 }, 00:27:30.954 "method": "bdev_nvme_attach_controller" 00:27:30.954 } 00:27:30.954 EOF 00:27:30.954 )") 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:30.954 [2024-12-06 14:21:19.573062] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 
00:27:30.954 [2024-12-06 14:21:19.573118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896908 ] 00:27:30.954 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.955 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.955 { 00:27:30.955 "params": { 00:27:30.955 "name": "Nvme$subsystem", 00:27:30.955 "trtype": "$TEST_TRANSPORT", 00:27:30.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.955 "adrfam": "ipv4", 00:27:30.955 "trsvcid": "$NVMF_PORT", 00:27:30.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.955 "hdgst": ${hdgst:-false}, 00:27:30.955 "ddgst": ${ddgst:-false} 00:27:30.955 }, 00:27:30.955 "method": "bdev_nvme_attach_controller" 00:27:30.955 } 00:27:30.955 EOF 00:27:30.955 )") 00:27:30.955 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:30.955 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.955 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.955 { 00:27:30.955 "params": { 00:27:30.955 "name": "Nvme$subsystem", 00:27:30.955 "trtype": "$TEST_TRANSPORT", 00:27:30.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.955 "adrfam": "ipv4", 00:27:30.955 "trsvcid": "$NVMF_PORT", 00:27:30.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.955 "hdgst": ${hdgst:-false}, 00:27:30.955 "ddgst": ${ddgst:-false} 00:27:30.955 }, 00:27:30.955 "method": "bdev_nvme_attach_controller" 00:27:30.955 } 00:27:30.955 EOF 00:27:30.955 )") 00:27:30.955 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:31.216 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:31.216 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:31.216 { 00:27:31.216 "params": { 00:27:31.216 "name": "Nvme$subsystem", 00:27:31.216 "trtype": "$TEST_TRANSPORT", 00:27:31.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.216 "adrfam": "ipv4", 00:27:31.216 "trsvcid": "$NVMF_PORT", 00:27:31.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.216 "hdgst": ${hdgst:-false}, 00:27:31.216 "ddgst": ${ddgst:-false} 00:27:31.216 }, 00:27:31.216 "method": "bdev_nvme_attach_controller" 00:27:31.216 } 00:27:31.216 EOF 00:27:31.216 )") 00:27:31.216 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:31.216 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:31.216 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:31.216 { 00:27:31.216 "params": { 00:27:31.216 "name": "Nvme$subsystem", 00:27:31.216 "trtype": "$TEST_TRANSPORT", 00:27:31.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.216 
"adrfam": "ipv4", 00:27:31.216 "trsvcid": "$NVMF_PORT", 00:27:31.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.216 "hdgst": ${hdgst:-false}, 00:27:31.216 "ddgst": ${ddgst:-false} 00:27:31.216 }, 00:27:31.216 "method": "bdev_nvme_attach_controller" 00:27:31.216 } 00:27:31.216 EOF 00:27:31.216 )") 00:27:31.216 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:31.216 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:27:31.216 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:27:31.216 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:31.216 "params": { 00:27:31.216 "name": "Nvme1", 00:27:31.216 "trtype": "tcp", 00:27:31.216 "traddr": "10.0.0.2", 00:27:31.216 "adrfam": "ipv4", 00:27:31.216 "trsvcid": "4420", 00:27:31.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:31.216 "hdgst": false, 00:27:31.216 "ddgst": false 00:27:31.216 }, 00:27:31.216 "method": "bdev_nvme_attach_controller" 00:27:31.216 },{ 00:27:31.216 "params": { 00:27:31.216 "name": "Nvme2", 00:27:31.216 "trtype": "tcp", 00:27:31.216 "traddr": "10.0.0.2", 00:27:31.216 "adrfam": "ipv4", 00:27:31.216 "trsvcid": "4420", 00:27:31.216 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:31.216 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:31.216 "hdgst": false, 00:27:31.216 "ddgst": false 00:27:31.216 }, 00:27:31.216 "method": "bdev_nvme_attach_controller" 00:27:31.216 },{ 00:27:31.216 "params": { 00:27:31.216 "name": "Nvme3", 00:27:31.216 "trtype": "tcp", 00:27:31.216 "traddr": "10.0.0.2", 00:27:31.216 "adrfam": "ipv4", 00:27:31.216 "trsvcid": "4420", 00:27:31.216 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:31.216 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:31.216 "hdgst": false, 00:27:31.216 "ddgst": false 00:27:31.216 }, 00:27:31.216 "method": "bdev_nvme_attach_controller" 00:27:31.216 },{ 00:27:31.216 "params": { 00:27:31.216 "name": "Nvme4", 00:27:31.216 "trtype": "tcp", 00:27:31.216 "traddr": "10.0.0.2", 00:27:31.216 "adrfam": "ipv4", 00:27:31.216 "trsvcid": "4420", 00:27:31.216 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:31.216 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:31.216 "hdgst": false, 00:27:31.216 "ddgst": false 00:27:31.216 }, 00:27:31.216 "method": "bdev_nvme_attach_controller" 00:27:31.216 },{ 00:27:31.216 "params": { 00:27:31.216 "name": "Nvme5", 00:27:31.216 "trtype": "tcp", 00:27:31.216 "traddr": "10.0.0.2", 00:27:31.216 "adrfam": "ipv4", 00:27:31.216 "trsvcid": "4420", 00:27:31.216 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:31.216 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:31.216 "hdgst": false, 00:27:31.216 "ddgst": false 00:27:31.216 }, 00:27:31.216 "method": "bdev_nvme_attach_controller" 00:27:31.216 },{ 00:27:31.216 "params": { 00:27:31.216 "name": "Nvme6", 00:27:31.216 "trtype": "tcp", 00:27:31.216 "traddr": "10.0.0.2", 00:27:31.216 "adrfam": "ipv4", 00:27:31.216 "trsvcid": "4420", 00:27:31.216 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:31.216 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:31.216 "hdgst": false, 00:27:31.216 "ddgst": false 00:27:31.216 }, 00:27:31.216 "method": "bdev_nvme_attach_controller" 00:27:31.216 },{ 00:27:31.216 "params": { 00:27:31.216 "name": "Nvme7", 00:27:31.216 "trtype": "tcp", 00:27:31.216 "traddr": "10.0.0.2", 
00:27:31.216 "adrfam": "ipv4", 00:27:31.216 "trsvcid": "4420", 00:27:31.216 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:31.216 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:31.216 "hdgst": false, 00:27:31.217 "ddgst": false 00:27:31.217 }, 00:27:31.217 "method": "bdev_nvme_attach_controller" 00:27:31.217 },{ 00:27:31.217 "params": { 00:27:31.217 "name": "Nvme8", 00:27:31.217 "trtype": "tcp", 00:27:31.217 "traddr": "10.0.0.2", 00:27:31.217 "adrfam": "ipv4", 00:27:31.217 "trsvcid": "4420", 00:27:31.217 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:31.217 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:31.217 "hdgst": false, 00:27:31.217 "ddgst": false 00:27:31.217 }, 00:27:31.217 "method": "bdev_nvme_attach_controller" 00:27:31.217 },{ 00:27:31.217 "params": { 00:27:31.217 "name": "Nvme9", 00:27:31.217 "trtype": "tcp", 00:27:31.217 "traddr": "10.0.0.2", 00:27:31.217 "adrfam": "ipv4", 00:27:31.217 "trsvcid": "4420", 00:27:31.217 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:31.217 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:31.217 "hdgst": false, 00:27:31.217 "ddgst": false 00:27:31.217 }, 00:27:31.217 "method": "bdev_nvme_attach_controller" 00:27:31.217 },{ 00:27:31.217 "params": { 00:27:31.217 "name": "Nvme10", 00:27:31.217 "trtype": "tcp", 00:27:31.217 "traddr": "10.0.0.2", 00:27:31.217 "adrfam": "ipv4", 00:27:31.217 "trsvcid": "4420", 00:27:31.217 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:31.217 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:31.217 "hdgst": false, 00:27:31.217 "ddgst": false 00:27:31.217 }, 00:27:31.217 "method": "bdev_nvme_attach_controller" 00:27:31.217 }' 00:27:31.217 [2024-12-06 14:21:19.663599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.217 [2024-12-06 14:21:19.699980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.141 Running I/O for 10 seconds... 
00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2896544 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2896544 ']' 
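The check that just passed (read_io_count=131 against the 100-read threshold) is what gates the shutdown: tc3 only kills the target once bdevperf demonstrably has I/O in flight. A minimal sketch of that gate follows, using the same rpc_cmd/jq invocation as shutdown.sh@61 above; the rpc_cmd stand-in and the one-second pacing between retries are assumptions, since the trace shows neither the helper's definition nor the loop's delay.

# Poll bdevperf until Nvme1n1 shows at least 100 completed reads, then allow the kill.
rpc_cmd() { ./scripts/rpc.py "$@"; }   # stand-in for the test suite's rpc_cmd helper
i=10; ret=1
while (( i != 0 )); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                        | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break                          # I/O is in flight; safe to shut the target down
    fi
    sleep 1                            # assumed pacing; the trace does not show the delay
    (( i-- ))
done
# ret == 0 here corresponds to shutdown.sh@65-70 above; killprocess 2896544 runs next.

The burst of tcp.c:1790 nvmf_tcp_qpair_set_recv_state errors that follows the kill below appears to be the target logging qpair state transitions while it tears down connections that bdevperf still holds open; for this shutdown test case that is expected noise rather than a failure.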
00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2896544 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2896544 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2896544' 00:27:33.727 killing process with pid 2896544 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2896544 00:27:33.727 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2896544 00:27:33.727 [2024-12-06 14:21:22.217052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.727 [2024-12-06 14:21:22.217269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217300] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the 
state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.217426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318de0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 
14:21:22.218716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.728 [2024-12-06 14:21:22.218749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same 
with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.218837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23192b0 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220483] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the 
state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.729 [2024-12-06 14:21:22.220691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.220695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a140 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 
14:21:22.221405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same 
with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.221615] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a610 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.730 [2024-12-06 14:21:22.222898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the 
state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.222995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.223088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347260 is same with the state(6) to be set 00:27:33.731 [2024-12-06 14:21:22.225419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
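[editor's note] A brief reading of the two patterns above, since the raw console output is dense: the repeated tcp.c:1790 nvmf_tcp_qpair_set_recv_state ERROR lines are the target-side recv-state setter refusing a no-op transition (it is being asked to enter the state it is already in) while the qpair is being torn down, and the nvme_qpair NOTICE lines that follow are the initiator side of the same teardown, with spdk_nvme_print_completion reporting each still-outstanding READ/WRITE as ABORTED - SQ DELETION (00/08) once the submission queue goes away after the CQ transport error. The sketch below is illustrative only, not SPDK source: the enum values, struct, and function names are hypothetical, and which concrete state the "(6)" in the log maps to is not asserted here; it just shows the shape of a same-state guard that produces this kind of repeated log line.

    /*
     * Illustrative sketch (hypothetical names, not SPDK code): a recv-state
     * setter that logs and returns when asked to "change" to the state it is
     * already in. Repeated teardown requests for the same state then flood
     * the log with lines like the ones captured above.
     */
    #include <stdio.h>

    enum pdu_recv_state {
        RECV_STATE_AWAIT_PDU_READY = 0,
        RECV_STATE_AWAIT_PDU_CH,
        RECV_STATE_AWAIT_PDU_PSH,
        RECV_STATE_AWAIT_PDU_PAYLOAD,
        RECV_STATE_QUIESCING,
        RECV_STATE_ERROR,           /* numeric value here is arbitrary for the sketch */
    };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    static void
    qpair_set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* No-op transition: warn and bail, the pattern seen in the log. */
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int
    main(void)
    {
        struct tcp_qpair tqpair = { .recv_state = RECV_STATE_ERROR };

        /* Several callers on the disconnect path all request the same state. */
        for (int i = 0; i < 3; i++) {
            qpair_set_recv_state(&tqpair, RECV_STATE_ERROR);
        }
        return 0;
    }

Under that reading, the volume of identical messages is expected noise during an abrupt disconnect rather than a distinct failure per line; the log resumes verbatim below.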
00:27:33.731 [2024-12-06 14:21:22.225668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.731 [2024-12-06 14:21:22.225677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.731 [2024-12-06 14:21:22.225684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 
[2024-12-06 14:21:22.225838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.225988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.225997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 
14:21:22.226005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.732 [2024-12-06 14:21:22.226208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.732 [2024-12-06 14:21:22.226218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.733 [2024-12-06 14:21:22.226550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.226579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.733 [2024-12-06 14:21:22.227138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50c20 is same with the state(6) to be set 00:27:33.733 [2024-12-06 14:21:22.227241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3d0 is same with the state(6) to be set 00:27:33.733 [2024-12-06 14:21:22.227333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a8e00 is same with the state(6) to be set 00:27:33.733 [2024-12-06 14:21:22.227423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 [2024-12-06 14:21:22.227478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.733 
[2024-12-06 14:21:22.227492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c524b0 is same with the state(6) to be set 00:27:33.733 [2024-12-06 14:21:22.227518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.733 [2024-12-06 14:21:22.227527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20738a0 is same with the state(6) to be set 00:27:33.734 [2024-12-06 14:21:22.227603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac1e0 is same with the state(6) to be set 00:27:33.734 [2024-12-06 14:21:22.227689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207cbe0 is same with the state(6) to be set 00:27:33.734 [2024-12-06 14:21:22.227777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a610 is same with the state(6) to be set 00:27:33.734 [2024-12-06 14:21:22.227859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 
14:21:22.227891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c52920 is same with the state(6) to be set 00:27:33.734 [2024-12-06 14:21:22.227942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.227992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.734 [2024-12-06 14:21:22.227999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.228006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207ea00 is same with the state(6) to be set 00:27:33.734 [2024-12-06 14:21:22.228662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.734 [2024-12-06 14:21:22.228682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.228694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.734 [2024-12-06 14:21:22.228702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.228712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.734 [2024-12-06 14:21:22.228720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.228729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.734 [2024-12-06 14:21:22.228737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.228746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.734 [2024-12-06 14:21:22.228754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.228763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.734 [2024-12-06 14:21:22.228770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.228780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.734 [2024-12-06 14:21:22.228787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.228797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.734 [2024-12-06 14:21:22.228804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.734 [2024-12-06 14:21:22.228813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.734 [2024-12-06 14:21:22.228820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.228829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.228837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.228846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.228853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.228865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.228873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.228882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.228890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.228899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.228906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.228915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.228923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.228932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.228940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.228949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.228963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.228974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.228981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.228991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.228998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.229021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.229038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.229055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.229072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.229090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.229107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.229124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.229141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.229157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.229175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.229184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.735 [2024-12-06 14:21:22.240861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.735 [2024-12-06 14:21:22.240870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.240880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.240891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.240898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.240908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.240916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.240925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.240934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.240944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.240951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.240961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.240968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.240978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.240985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.240995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.736 [2024-12-06 14:21:22.241292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.736 [2024-12-06 14:21:22.241519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.736 [2024-12-06 14:21:22.241529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:33.737 [2024-12-06 14:21:22.241667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 
[2024-12-06 14:21:22.241840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.241985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.241995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 
14:21:22.242011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.242028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.242045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.242062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.242079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.242095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.242112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.242129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.242147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.242164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 
14:21:22.242180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.737 [2024-12-06 14:21:22.242187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.737 [2024-12-06 14:21:22.242197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 
14:21:22.242351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.242427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.242435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20536a0 is same with the state(6) to be set 00:27:33.738 [2024-12-06 14:21:22.243989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.738 [2024-12-06 14:21:22.244326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.738 [2024-12-06 14:21:22.244336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244429] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.244990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.739 [2024-12-06 14:21:22.244998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.739 [2024-12-06 14:21:22.245007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.245015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.245025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.245032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.245041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.245049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.245059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.245066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.245075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.245082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.245092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.245099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.245202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c50c20 (9): Bad file descriptor 00:27:33.740 [2024-12-06 14:21:22.245221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ac3d0 (9): Bad file descriptor 00:27:33.740 [2024-12-06 14:21:22.245234] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a8e00 (9): Bad file descriptor
00:27:33.740 [2024-12-06 14:21:22.245250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c524b0 (9): Bad file descriptor
00:27:33.740 [2024-12-06 14:21:22.245265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20738a0 (9): Bad file descriptor
00:27:33.740 [2024-12-06 14:21:22.245284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ac1e0 (9): Bad file descriptor
00:27:33.740 [2024-12-06 14:21:22.245298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207cbe0 (9): Bad file descriptor
00:27:33.740 [2024-12-06 14:21:22.245314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a610 (9): Bad file descriptor
00:27:33.740 [2024-12-06 14:21:22.245328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c52920 (9): Bad file descriptor
00:27:33.740 [2024-12-06 14:21:22.245345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207ea00 (9): Bad file descriptor
00:27:33.740 [2024-12-06 14:21:22.249144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:27:33.740 [2024-12-06 14:21:22.251231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:27:33.740 [2024-12-06 14:21:22.251265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:27:33.740 [2024-12-06 14:21:22.251776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.740 [2024-12-06 14:21:22.251817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a610 with addr=10.0.0.2, port=4420
00:27:33.740 [2024-12-06 14:21:22.251829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a610 is same with the state(6) to be set
00:27:33.740 [2024-12-06 14:21:22.252529] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:33.740 [2024-12-06 14:21:22.252581] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:33.740 [2024-12-06 14:21:22.252712] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:33.740 [2024-12-06 14:21:22.252991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:27:33.740 [2024-12-06 14:21:22.253295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.740 [2024-12-06 14:21:22.253311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c50c20 with addr=10.0.0.2, port=4420
00:27:33.740 [2024-12-06 14:21:22.253319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50c20 is same with the state(6) to be set
00:27:33.740 [2024-12-06 14:21:22.253750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.740 [2024-12-06 14:21:22.253789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207ea00 with addr=10.0.0.2, port=4420
00:27:33.740 [2024-12-06 14:21:22.253802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207ea00 is same with the state(6) to be set 00:27:33.740 [2024-12-06 
14:21:22.253819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a610 (9): Bad file descriptor 00:27:33.740 [2024-12-06 14:21:22.253880] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.740 [2024-12-06 14:21:22.253925] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.740 [2024-12-06 14:21:22.254273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 
[2024-12-06 14:21:22.254449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 
14:21:22.254631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.740 [2024-12-06 14:21:22.254648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.740 [2024-12-06 14:21:22.254656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254803] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.254986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.254995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.741 [2024-12-06 14:21:22.255218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.741 [2024-12-06 14:21:22.255225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.255235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.255242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.255251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.255259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.255270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.255278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.255287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.255295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.255304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.255312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.255321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.742 [2024-12-06 14:21:22.255329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.742 [2024-12-06 14:21:22.255338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.742 [2024-12-06 14:21:22.255346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.742 [2024-12-06 14:21:22.255355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.742 [2024-12-06 14:21:22.255363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.742 [2024-12-06 14:21:22.255372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.742 [2024-12-06 14:21:22.255380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.742 [2024-12-06 14:21:22.255390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.742 [2024-12-06 14:21:22.255397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.742 [2024-12-06 14:21:22.255406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2059730 is same with the state(6) to be set
00:27:33.742 [2024-12-06 14:21:22.255485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:27:33.742 [2024-12-06 14:21:22.255789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.742 [2024-12-06 14:21:22.255803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a8e00 with addr=10.0.0.2, port=4420
00:27:33.742 [2024-12-06 14:21:22.255811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a8e00 is same with the state(6) to be set
00:27:33.742 [2024-12-06 14:21:22.255822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c50c20 (9): Bad file descriptor
00:27:33.742 [2024-12-06 14:21:22.255831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207ea00 (9): Bad file descriptor
00:27:33.742 [2024-12-06 14:21:22.255840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:27:33.742 [2024-12-06 14:21:22.255848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:27:33.742 [2024-12-06 14:21:22.255857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:27:33.742 [2024-12-06 14:21:22.255866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:27:33.742 [2024-12-06 14:21:22.255886] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:27:33.742 [2024-12-06 14:21:22.257245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:27:33.742 [2024-12-06 14:21:22.257701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.742 [2024-12-06 14:21:22.257739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ac1e0 with addr=10.0.0.2, port=4420
00:27:33.742 [2024-12-06 14:21:22.257750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac1e0 is same with the state(6) to be set
00:27:33.742 [2024-12-06 14:21:22.257765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a8e00 (9): Bad file descriptor
00:27:33.742 [2024-12-06 14:21:22.257775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:27:33.742 [2024-12-06 14:21:22.257783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:27:33.742 [2024-12-06 14:21:22.257791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:27:33.742 [2024-12-06 14:21:22.257800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:27:33.742 [2024-12-06 14:21:22.257808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:27:33.742 [2024-12-06 14:21:22.257816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:27:33.742 [2024-12-06 14:21:22.257823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:27:33.742 [2024-12-06 14:21:22.257829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:27:33.742 [2024-12-06 14:21:22.257871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.257881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.257898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.257906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.257916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.257924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.257934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.257941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.257951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.257958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.257968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.257975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.257985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.257998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.258008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.258015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.258025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.258033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.258043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.258050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 
14:21:22.258060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.258067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.258076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.258084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.258093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.258101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.258110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.258118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.258128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.258136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.258145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.258152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.742 [2024-12-06 14:21:22.258162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.742 [2024-12-06 14:21:22.258169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258232] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.743 [2024-12-06 14:21:22.258758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 14:21:22.258767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.258981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.258990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e56720 is same with the state(6) to be set 00:27:33.744 [2024-12-06 14:21:22.260280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260398] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.744 [2024-12-06 14:21:22.260735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.744 [2024-12-06 14:21:22.260742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.260988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.260995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
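The repeating NOTICE pairs in this stretch are SPDK dumping each outstanding READ via nvme_io_qpair_print_command together with its completion from spdk_nvme_print_completion. The "(00/08)" after ABORTED - SQ DELETION appears to be the NVMe status code type / status code pair in hex, i.e. generic command status 0x00 with status code 0x08 (command aborted due to SQ deletion), which is the expected outcome for I/O still in flight while the submission queue is torn down during the TCP qpair disconnect; the nvme_tcp.c recv-state ERROR lines only report that the qpair is already in the requested state. A minimal stand-alone tally of these aborts, assuming the console output is saved to a file such as console.log (hypothetical helper and file name, not part of the SPDK test suite), could look like:

#!/usr/bin/env python3
# Illustrative helper (not part of the SPDK test suite): tally the
# "ABORTED - SQ DELETION" completions printed in a console log like the one above.
import re
import sys
from collections import Counter

# Per-command NOTICE from nvme_io_qpair_print_command, e.g.
#   READ sqid:1 cid:12 nsid:1 lba:17920 len:128 ...
CMD_RE = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
# Completion from spdk_nvme_print_completion, e.g.
#   ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...
CPL_RE = re.compile(r"ABORTED - SQ DELETION \(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)")

def main(path):
    commands = 0
    aborted = Counter()      # (sct, sc) -> count
    lbas = []
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            # A single console line may carry several wrapped entries, so scan all matches.
            for m in CMD_RE.finditer(line):
                commands += 1
                lbas.append(int(m.group(4)))
            for c in CPL_RE.finditer(line):
                aborted[(int(c.group(1), 16), int(c.group(2), 16))] += 1
    if lbas:
        print(f"READ commands printed: {commands}, lba {min(lbas)}..{max(lbas)}")
    for (sct, sc), n in sorted(aborted.items()):
        # 0x00/0x08 = generic command status / command aborted due to SQ deletion
        print(f"aborted completions sct=0x{sct:02x} sc=0x{sc:02x}: {n}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Run as "python3 tally_aborts.py console.log" (script name assumed). Against this portion of the dump it would count 64 aborted READs per disconnecting qpair, cid 0 through 63 covering lba 16384 through 24448 in 128-block steps, one batch for each tqpair pointer named in the recv-state messages.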
00:27:33.745 [2024-12-06 14:21:22.261091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 
14:21:22.261261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.261387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.745 [2024-12-06 14:21:22.261395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57790 is same with the state(6) to be set 00:27:33.745 [2024-12-06 14:21:22.262672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.745 [2024-12-06 14:21:22.262686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262708] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.262991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.262998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.746 [2024-12-06 14:21:22.263196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.746 [2024-12-06 14:21:22.263204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.747 [2024-12-06 14:21:22.263753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.747 [2024-12-06 14:21:22.263760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.263769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.263777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.263787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.263794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.263802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2054a00 is same with the state(6) to be set 00:27:33.748 [2024-12-06 14:21:22.265122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.748 [2024-12-06 14:21:22.265577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.748 [2024-12-06 14:21:22.265589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.749 [2024-12-06 14:21:22.265955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.265989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.265998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.266006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.266017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.266024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.266034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.266041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.266051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.266058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.749 [2024-12-06 14:21:22.266068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.749 [2024-12-06 14:21:22.266075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.750 [2024-12-06 14:21:22.266084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.750 [2024-12-06 14:21:22.266092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.750 [2024-12-06 14:21:22.266101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.750 [2024-12-06 14:21:22.266109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.750 [2024-12-06 14:21:22.266118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.750 [2024-12-06 
14:21:22.266126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.750 [2024-12-06 14:21:22.266136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.750 [2024-12-06 14:21:22.266144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.750 [2024-12-06 14:21:22.266153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.750 [2024-12-06 14:21:22.266161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.750 [2024-12-06 14:21:22.266170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.750 [2024-12-06 14:21:22.266178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.750 [2024-12-06 14:21:22.266188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.750 [2024-12-06 14:21:22.266195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.750 [2024-12-06 14:21:22.266205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.750 [2024-12-06 14:21:22.266212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.750 [2024-12-06 14:21:22.266222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.750 [2024-12-06 14:21:22.266231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.750 [2024-12-06 14:21:22.266240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2055d60 is same with the state(6) to be set
00:27:33.750 [2024-12-06 14:21:22.268247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:33.750 [2024-12-06 14:21:22.268273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:27:33.750 [2024-12-06 14:21:22.268286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:27:33.750 task offset: 29312 on job bdev=Nvme7n1 fails
00:27:33.750 
00:27:33.750 Latency(us)
00:27:33.750 [2024-12-06T13:21:22.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:33.750 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.750 Job: Nvme1n1 ended in about 0.88 seconds with error
00:27:33.750 Verification LBA range: start 0x0 length 0x400
00:27:33.750 Nvme1n1 : 0.88 145.66 9.10 72.83 0.00 289363.63 19660.80 248162.99
00:27:33.750 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.750 Job: Nvme2n1 ended in about 0.88 seconds with error
00:27:33.750 Verification LBA range: start 0x0 length 0x400
00:27:33.750 Nvme2n1 : 0.88 145.26 9.08 72.63 0.00 283739.02 36481.71 228939.09
00:27:33.750 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.750 Job: Nvme3n1 ended in about 0.87 seconds with error
00:27:33.750 Verification LBA range: start 0x0 length 0x400
00:27:33.750 Nvme3n1 : 0.87 221.91 13.87 73.97 0.00 203943.25 34515.63 225443.84
00:27:33.750 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.750 Job: Nvme4n1 ended in about 0.87 seconds with error
00:27:33.750 Verification LBA range: start 0x0 length 0x400
00:27:33.750 Nvme4n1 : 0.87 221.61 13.85 73.87 0.00 199376.21 19442.35 228939.09
00:27:33.750 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.750 Job: Nvme5n1 ended in about 0.88 seconds with error
00:27:33.750 Verification LBA range: start 0x0 length 0x400
00:27:33.750 Nvme5n1 : 0.88 144.87 9.05 72.43 0.00 265222.54 19660.80 246415.36
00:27:33.750 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.750 Job: Nvme6n1 ended in about 0.89 seconds with error
00:27:33.750 Verification LBA range: start 0x0 length 0x400
00:27:33.750 Nvme6n1 : 0.89 144.47 9.03 72.23 0.00 259483.88 18350.08 237677.23
00:27:33.750 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.750 Job: Nvme7n1 ended in about 0.86 seconds with error
00:27:33.750 Verification LBA range: start 0x0 length 0x400
00:27:33.750 Nvme7n1 : 0.86 222.63 13.91 74.21 0.00 183788.80 15728.64 200977.07
00:27:33.750 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.750 Verification LBA range: start 0x0 length 0x400
00:27:33.750 Nvme8n1 : 0.87 220.75 13.80 0.00 0.00 241108.20 18677.76 262144.00
00:27:33.750 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.750 Job: Nvme9n1 ended in about 0.88 seconds with error
00:27:33.750 Verification LBA range: start 0x0 length 0x400
00:27:33.750 Nvme9n1 : 0.88 146.15 9.13 73.08 0.00 236656.07 18896.21 244667.73
00:27:33.750 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.750 Job: Nvme10n1 ended in about 0.87 seconds with error
00:27:33.750 Verification LBA range: start 0x0 length 0x400
00:27:33.750 Nvme10n1 : 0.87 147.51 9.22 73.76 0.00 227648.85 19333.12 263891.63
00:27:33.750 [2024-12-06T13:21:22.390Z] ===================================================================================================================
00:27:33.750 [2024-12-06T13:21:22.390Z] Total : 1760.82 110.05 659.01 0.00 235093.93 15728.64 263891.63
00:27:33.750 [2024-12-06 14:21:22.295299] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:33.750 [2024-12-06 14:21:22.295332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:27:33.750 [2024-12-06 14:21:22.295712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.750 [2024-12-06 14:21:22.295731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ac3d0 with addr=10.0.0.2, port=4420
00:27:33.750 [2024-12-06 14:21:22.295741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3d0 is same with the state(6) to be set
00:27:33.750 [2024-12-06 14:21:22.295756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ac1e0 (9): Bad file descriptor
00:27:33.750 [2024-12-06 14:21:22.295766] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:27:33.750 [2024-12-06 14:21:22.295773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:27:33.750 [2024-12-06 14:21:22.295781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:27:33.750 [2024-12-06 14:21:22.295790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:27:33.751 [2024-12-06 14:21:22.295851] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:27:33.751 [2024-12-06 14:21:22.295864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ac3d0 (9): Bad file descriptor 00:27:33.751 [2024-12-06 14:21:22.296354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-12-06 14:21:22.296370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c52920 with addr=10.0.0.2, port=4420 00:27:33.751 [2024-12-06 14:21:22.296378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c52920 is same with the state(6) to be set 00:27:33.751 [2024-12-06 14:21:22.296757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-12-06 14:21:22.296767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c524b0 with addr=10.0.0.2, port=4420 00:27:33.751 [2024-12-06 14:21:22.296775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c524b0 is same with the state(6) to be set 00:27:33.751 [2024-12-06 14:21:22.297057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-12-06 14:21:22.297067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207cbe0 with addr=10.0.0.2, port=4420 00:27:33.751 [2024-12-06 14:21:22.297074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207cbe0 is same with the state(6) to be set 00:27:33.751 [2024-12-06 14:21:22.297396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-12-06 14:21:22.297406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20738a0 with addr=10.0.0.2, port=4420 00:27:33.751 [2024-12-06 14:21:22.297413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20738a0 is same with the state(6) to be set 00:27:33.751 [2024-12-06 14:21:22.297423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:27:33.751 [2024-12-06 14:21:22.297429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:27:33.751 [2024-12-06 14:21:22.297437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:27:33.751 [2024-12-06 14:21:22.297444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:27:33.751 [2024-12-06 14:21:22.297468] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:27:33.751 [2024-12-06 14:21:22.297480] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:27:33.751 [2024-12-06 14:21:22.297496] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:27:33.751 [2024-12-06 14:21:22.297507] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:27:33.751 [2024-12-06 14:21:22.297523] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:27:33.751 [2024-12-06 14:21:22.298829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:27:33.751 [2024-12-06 14:21:22.298846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:27:33.751 [2024-12-06 14:21:22.298855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:27:33.751 [2024-12-06 14:21:22.298864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:27:33.751 [2024-12-06 14:21:22.298920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c52920 (9): Bad file descriptor 00:27:33.751 [2024-12-06 14:21:22.298931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c524b0 (9): Bad file descriptor 00:27:33.751 [2024-12-06 14:21:22.298941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207cbe0 (9): Bad file descriptor 00:27:33.751 [2024-12-06 14:21:22.298951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20738a0 (9): Bad file descriptor 00:27:33.751 [2024-12-06 14:21:22.298960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:27:33.751 [2024-12-06 14:21:22.298966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:27:33.751 [2024-12-06 14:21:22.298974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:27:33.751 [2024-12-06 14:21:22.298981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:27:33.751 [2024-12-06 14:21:22.299050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:27:33.751 [2024-12-06 14:21:22.299243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-12-06 14:21:22.299256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6a610 with addr=10.0.0.2, port=4420 00:27:33.751 [2024-12-06 14:21:22.299264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a610 is same with the state(6) to be set 00:27:33.751 [2024-12-06 14:21:22.299448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-12-06 14:21:22.299464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207ea00 with addr=10.0.0.2, port=4420 00:27:33.751 [2024-12-06 14:21:22.299471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207ea00 is same with the state(6) to be set 00:27:33.751 [2024-12-06 14:21:22.299777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-12-06 14:21:22.299787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c50c20 with addr=10.0.0.2, port=4420 00:27:33.751 [2024-12-06 14:21:22.299794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50c20 is same with the state(6) to be set 00:27:33.751 [2024-12-06 14:21:22.299985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.751 [2024-12-06 14:21:22.299994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a8e00 with addr=10.0.0.2, port=4420 00:27:33.751 [2024-12-06 14:21:22.300001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a8e00 is same with the state(6) to be set 00:27:33.751 [2024-12-06 14:21:22.300012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:33.751 [2024-12-06 14:21:22.300019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:33.751 [2024-12-06 14:21:22.300026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:33.751 [2024-12-06 14:21:22.300032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:27:33.751 [2024-12-06 14:21:22.300040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:27:33.751 [2024-12-06 14:21:22.300046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:27:33.751 [2024-12-06 14:21:22.300053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:27:33.751 [2024-12-06 14:21:22.300060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:27:33.751 [2024-12-06 14:21:22.300067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:27:33.751 [2024-12-06 14:21:22.300073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:27:33.751 [2024-12-06 14:21:22.300080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:27:33.751 [2024-12-06 14:21:22.300087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:27:33.751 [2024-12-06 14:21:22.300094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:27:33.751 [2024-12-06 14:21:22.300100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:27:33.751 [2024-12-06 14:21:22.300106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:27:33.752 [2024-12-06 14:21:22.300113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:27:33.752 [2024-12-06 14:21:22.300363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.752 [2024-12-06 14:21:22.300374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ac1e0 with addr=10.0.0.2, port=4420 00:27:33.752 [2024-12-06 14:21:22.300381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac1e0 is same with the state(6) to be set 00:27:33.752 [2024-12-06 14:21:22.300390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6a610 (9): Bad file descriptor 00:27:33.752 [2024-12-06 14:21:22.300400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207ea00 (9): Bad file descriptor 00:27:33.752 [2024-12-06 14:21:22.300409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c50c20 (9): Bad file descriptor 00:27:33.752 [2024-12-06 14:21:22.300419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a8e00 (9): Bad file descriptor 00:27:33.752 [2024-12-06 14:21:22.300449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ac1e0 (9): Bad file descriptor 00:27:33.752 [2024-12-06 14:21:22.300464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:27:33.752 [2024-12-06 14:21:22.300471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:27:33.752 [2024-12-06 14:21:22.300478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:27:33.752 [2024-12-06 14:21:22.300484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:27:33.752 [2024-12-06 14:21:22.300492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:27:33.752 [2024-12-06 14:21:22.300501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:27:33.752 [2024-12-06 14:21:22.300509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:27:33.752 [2024-12-06 14:21:22.300515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:27:33.752 [2024-12-06 14:21:22.300522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:27:33.752 [2024-12-06 14:21:22.300528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:27:33.752 [2024-12-06 14:21:22.300535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:27:33.752 [2024-12-06 14:21:22.300542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:27:33.752 [2024-12-06 14:21:22.300549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:27:33.752 [2024-12-06 14:21:22.300555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:27:33.752 [2024-12-06 14:21:22.300562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:27:33.752 [2024-12-06 14:21:22.300568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:27:33.752 [2024-12-06 14:21:22.300607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:27:33.752 [2024-12-06 14:21:22.300616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:27:33.752 [2024-12-06 14:21:22.300623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:27:33.752 [2024-12-06 14:21:22.300629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:27:34.013 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2896908 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2896908 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2896908 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:34.958 rmmod nvme_tcp 00:27:34.958 
rmmod nvme_fabrics 00:27:34.958 rmmod nvme_keyring 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2896544 ']' 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2896544 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2896544 ']' 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2896544 00:27:34.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2896544) - No such process 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2896544 is not found' 00:27:34.958 Process with pid 2896544 is not found 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.958 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.500 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.501 00:27:37.501 real 0m7.890s 00:27:37.501 user 0m19.750s 00:27:37.501 sys 0m1.228s 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:37.501 ************************************ 00:27:37.501 END TEST nvmf_shutdown_tc3 00:27:37.501 ************************************ 00:27:37.501 14:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:37.501 ************************************ 00:27:37.501 START TEST nvmf_shutdown_tc4 00:27:37.501 ************************************ 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:37.501 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:37.501 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.501 14:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:37.501 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.501 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:37.501 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:37.502 14:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:37.502 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:37.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:27:37.502 00:27:37.502 --- 10.0.0.2 ping statistics --- 00:27:37.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.502 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:27:37.502 00:27:37.502 --- 10.0.0.1 ping statistics --- 00:27:37.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.502 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2898137 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2898137 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2898137 ']' 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
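For reference, the namespace and addressing work traced above reduces to the minimal sketch below. The interface names (cvl_0_0/cvl_0_1), the addresses, the iptables rule and the nvmf_tgt arguments are the ones this run actually used; the socket-wait loop at the end is only a stand-in for the harness's waitforlisten helper, so treat this as an illustrative sketch rather than the test script itself.

#!/usr/bin/env bash
set -euo pipefail

NS=cvl_0_0_ns_spdk                  # target-side network namespace
TGT_IF=cvl_0_0  INI_IF=cvl_0_1      # E810 port 0 -> target, port 1 -> initiator
TGT_IP=10.0.0.2 INI_IP=10.0.0.1
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                # move the target port into the namespace
ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in from the initiator side
ping -c 1 "$TGT_IP"                       # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 "$INI_IP"   # target -> initiator sanity check
modprobe nvme-tcp                         # host-side NVMe/TCP support

ip netns exec "$NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # crude wait for the RPC socket to appear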
00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:37.502 [2024-12-06 14:21:26.134403] Starting SPDK v25.01-pre git sha1 6696ebaae / DPDK 24.03.0 initialization... 00:27:37.502 [2024-12-06 14:21:26.134452] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.763 [2024-12-06 14:21:26.214563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.763 [2024-12-06 14:21:26.253898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.763 [2024-12-06 14:21:26.253934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.763 [2024-12-06 14:21:26.253943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.763 [2024-12-06 14:21:26.253950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.763 [2024-12-06 14:21:26.253956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.763 [2024-12-06 14:21:26.255738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.763 [2024-12-06 14:21:26.255895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.763 [2024-12-06 14:21:26.256041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.763 [2024-12-06 14:21:26.256041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:38.336 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.336 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:27:38.336 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:38.336 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:38.336 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:38.336 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.336 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:38.336 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.336 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:38.336 [2024-12-06 14:21:26.970618] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:38.598 14:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:38.598 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.598 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:38.598 Malloc1 
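The per-subsystem RPC batch itself is generated inside shutdown.sh, and only its construction (the cat calls into rpcs.txt) is traced here, so the exact payload is not visible in this log. A hypothetical equivalent issued directly with scripts/rpc.py would look roughly like the loop below. The Malloc1 ... Malloc10 lines around this point are the bdev-creation responses from that batch; only the NQN pattern (cnode1, cnode5 and cnode10 show up in the error output further down) and the 10.0.0.2:4420 TCP listener are confirmed by this log, while the bdev size and serial numbers here are illustrative guesses.

#!/usr/bin/env bash
RPC=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock)

for i in {1..10}; do
    nqn="nqn.2016-06.io.spdk:cnode$i"
    "${RPC[@]}" bdev_malloc_create -b "Malloc$i" 64 512        # 64 MiB, 512-byte blocks; prints "Malloc$i"
    "${RPC[@]}" nvmf_create_subsystem "$nqn" -a -s "SPDK$i"    # -a: allow any host
    "${RPC[@]}" nvmf_subsystem_add_ns "$nqn" "Malloc$i"
    "${RPC[@]}" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
done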
00:27:38.598 [2024-12-06 14:21:27.081657] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.598 Malloc2 00:27:38.598 Malloc3 00:27:38.598 Malloc4 00:27:38.598 Malloc5 00:27:38.859 Malloc6 00:27:38.859 Malloc7 00:27:38.859 Malloc8 00:27:38.859 Malloc9 00:27:38.859 Malloc10 00:27:38.859 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.859 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:38.859 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:38.859 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:38.859 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2898446 00:27:38.859 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:27:38.859 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:27:39.118 [2024-12-06 14:21:27.559973] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:44.425 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:44.425 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2898137 00:27:44.425 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2898137 ']' 00:27:44.425 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2898137 00:27:44.425 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:27:44.426 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.426 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2898137 00:27:44.426 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:44.426 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:44.426 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2898137' 00:27:44.426 killing process with pid 2898137 00:27:44.426 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2898137 00:27:44.426 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2898137 00:27:44.426 Write completed with error (sct=0, 
sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 [2024-12-06 14:21:32.556947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308660 is same with the state(6) to be set 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 [2024-12-06 14:21:32.556991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308660 is same with the state(6) to be set 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 [2024-12-06 14:21:32.556997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308660 is same with the state(6) to be set 00:27:44.426 [2024-12-06 14:21:32.557002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308660 is same with the state(6) to be set 00:27:44.426 [2024-12-06 14:21:32.557007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308660 is same with the state(6) to be set 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 [2024-12-06 14:21:32.557012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308660 is same with the state(6) to be set 00:27:44.426 [2024-12-06 14:21:32.557018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308660 is same with the state(6) to be set 00:27:44.426 [2024-12-06 14:21:32.557022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308660 is same with the state(6) to be set 00:27:44.426 [2024-12-06 14:21:32.557027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308660 is same with the state(6) to be set 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 [2024-12-06 14:21:32.557032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308660 is same with the state(6) to be set 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error 
(sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 [2024-12-06 14:21:32.557446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 starting 
I/O failed: -6 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.426 Write completed with error (sct=0, sc=8) 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 [2024-12-06 14:21:32.558532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.427 starting I/O failed: -6 00:27:44.427 starting I/O failed: -6 00:27:44.427 starting I/O failed: -6 00:27:44.427 starting I/O failed: -6 00:27:44.427 starting I/O failed: -6 00:27:44.427 [2024-12-06 14:21:32.559345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306ad0 is same with the state(6) to be set 00:27:44.427 [2024-12-06 14:21:32.559371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306ad0 is same with the state(6) to be set 00:27:44.427 [2024-12-06 14:21:32.559376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306ad0 is same with the state(6) to be set 00:27:44.427 [2024-12-06 14:21:32.559382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306ad0 is same with the state(6) to be set 00:27:44.427 [2024-12-06 14:21:32.559388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306ad0 is same with the state(6) to be set 00:27:44.427 [2024-12-06 14:21:32.559392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306ad0 is same with the state(6) to be set 00:27:44.427 [2024-12-06 14:21:32.559397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306ad0 is same with the state(6) to be set 00:27:44.427 [2024-12-06 14:21:32.559402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306ad0 is same with the state(6) to be set 00:27:44.427 starting I/O failed: -6 00:27:44.427 starting I/O failed: -6 00:27:44.427 [2024-12-06 14:21:32.559579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306fa0 is same with the state(6) to be set 00:27:44.427 [2024-12-06 14:21:32.559602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306fa0 is same with the state(6) to be set 00:27:44.427 [2024-12-06 14:21:32.559607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306fa0 is same with the state(6) to be set 00:27:44.427 [2024-12-06 14:21:32.559613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306fa0 is same with the state(6) to be set 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write 
completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write 
completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.427 Write completed with error (sct=0, sc=8) 00:27:44.427 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write 
completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 [2024-12-06 14:21:32.561697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:44.428 NVMe io qpair process completion error 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 [2024-12-06 14:21:32.562275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2307cc0 is same with the state(6) to be set 00:27:44.428 [2024-12-06 14:21:32.562289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2307cc0 is same with the state(6) to be set 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 [2024-12-06 14:21:32.562617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308190 is same with the state(6) to be set 00:27:44.428 [2024-12-06 14:21:32.562632] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308190 is same with the state(6) to be set 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 [2024-12-06 14:21:32.562637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308190 is same with the state(6) to be set 00:27:44.428 starting I/O failed: -6 00:27:44.428 [2024-12-06 14:21:32.562643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308190 is same with the state(6) to be set 00:27:44.428 [2024-12-06 14:21:32.562648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308190 is same with Write completed with error (sct=0, sc=8) 00:27:44.428 the state(6) to be set 00:27:44.428 [2024-12-06 14:21:32.562663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308190 is same with the state(6) to be set 00:27:44.428 [2024-12-06 14:21:32.562668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2308190 is same with the state(6) to be set 00:27:44.428 [2024-12-06 14:21:32.562685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 starting I/O failed: -6 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.428 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with 
error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 [2024-12-06 14:21:32.563490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 
00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.429 [2024-12-06 14:21:32.564393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:44.429 Write completed with error (sct=0, sc=8) 00:27:44.429 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 
00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 [2024-12-06 14:21:32.566170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:44.430 NVMe io qpair process completion error 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 Write completed with error (sct=0, sc=8) 00:27:44.430 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 [2024-12-06 14:21:32.567434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 
00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 [2024-12-06 14:21:32.568364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 
Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.431 Write completed with error (sct=0, sc=8) 00:27:44.431 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 [2024-12-06 14:21:32.569275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write 
completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write 
completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.432 starting I/O failed: -6 00:27:44.432 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 [2024-12-06 14:21:32.570723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:44.433 NVMe io qpair process completion error 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with 
error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 [2024-12-06 14:21:32.571847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 [2024-12-06 
14:21:32.572646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.433 Write completed with error (sct=0, sc=8) 00:27:44.433 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 
Write completed with error (sct=0, sc=8) 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 [2024-12-06 14:21:32.573564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 
starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.434 starting I/O failed: -6 00:27:44.434 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 [2024-12-06 14:21:32.575621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.435 NVMe io qpair process completion error 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 
00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 [2024-12-06 14:21:32.576873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting 
I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 [2024-12-06 14:21:32.577673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 starting I/O failed: -6 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.435 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 
Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 [2024-12-06 14:21:32.578589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with 
error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.436 starting I/O failed: -6 00:27:44.436 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error 
(sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 [2024-12-06 14:21:32.580268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.437 NVMe io qpair process completion error 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error 
(sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 [2024-12-06 14:21:32.581523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 [2024-12-06 14:21:32.582360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 
Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.437 starting I/O failed: -6 00:27:44.437 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 
00:27:44.438 starting I/O failed: -6 00:27:44.438 Write completed with error (sct=0, sc=8) 00:27:44.438 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted here and between each of the qpair errors below]
00:27:44.438 [2024-12-06 14:21:32.583294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.439 [2024-12-06 14:21:32.585915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:44.439 NVMe io qpair process completion error
00:27:44.439 [2024-12-06 14:21:32.587032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:44.440 [2024-12-06 14:21:32.587927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:44.440 [2024-12-06 14:21:32.588844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.441 [2024-12-06 14:21:32.590972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:44.441 NVMe io qpair process completion error
00:27:44.441 [2024-12-06 14:21:32.591945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:44.442 [2024-12-06 14:21:32.592904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:44.442 [2024-12-06 14:21:32.593846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:44.443 [2024-12-06 14:21:32.595437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:44.443 NVMe io qpair process completion error
00:27:44.443 [2024-12-06 14:21:32.596449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:44.445 [2024-12-06 14:21:32.600944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:44.445 spdk_nvme_perf: sock.c:764: sock_group_impl_poll_count: Assertion `sock->cb_fn != NULL' failed.
00:27:44.445 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:27:45.408 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2898446
00:27:45.408 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:27:45.408 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2898446
00:27:45.408 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:27:45.408 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:45.408 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:27:45.408 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:45.408 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2898446
00:27:47.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 655: 2898446 Aborted (core dumped) $rootdir/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r "trtype:$TEST_TRANSPORT adrfam:IPV4 traddr:$NVMF_FIRST_TARGET_IP trsvcid:$NVMF_PORT" -P 4
00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=134
00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es >
128 )) 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@664 -- # es=6 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@665 -- # case "$es" in 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@671 -- # es=0 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # trap - ERR 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # print_backtrace 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1159 -- # args=('2898446' 'wait' 'nvmf_shutdown_tc4' 'nvmf_shutdown_tc4' '--transport=tcp') 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1159 -- # local args 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1161 -- # xtrace_disable 00:27:47.954 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:47.955 ========== Backtrace start: ========== 00:27:47.955 00:27:47.955 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:679 -> NOT(["wait"],["2898446"]) 00:27:47.955 ... 00:27:47.955 674 elif [[ -n ${EXIT_STATUS:-} ]] && ((es != EXIT_STATUS)); then 00:27:47.955 675 es=0 00:27:47.955 676 fi 00:27:47.955 677 00:27:47.955 678 # invert error code of any command and also trigger ERR on 0 (unlike bash ! prefix) 00:27:47.955 => 679 ((!es == 0)) 00:27:47.955 680 } 00:27:47.955 681 00:27:47.955 682 function timing() { 00:27:47.955 683 direction="$1" 00:27:47.955 684 testname="$2" 00:27:47.955 ... 00:27:47.955 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh:158 -> nvmf_shutdown_tc4([]) 00:27:47.955 ... 00:27:47.955 153 00:27:47.955 154 # Kill the target half way through 00:27:47.955 155 killprocess $nvmfpid 00:27:47.955 156 sleep 1 00:27:47.955 157 # Due to IOs are completed with errors, perf exits with bad status 00:27:47.955 => 158 NOT wait $perfpid 00:27:47.955 159 stoptarget 00:27:47.955 160 } 00:27:47.955 161 00:27:47.955 162 run_test "nvmf_shutdown_tc1" nvmf_shutdown_tc1 00:27:47.955 163 run_test "nvmf_shutdown_tc2" nvmf_shutdown_tc2 00:27:47.955 ... 00:27:47.955 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvmf_shutdown_tc4"],["nvmf_shutdown_tc4"]) 00:27:47.955 ... 00:27:47.955 1124 timing_enter $test_name 00:27:47.955 1125 echo "************************************" 00:27:47.955 1126 echo "START TEST $test_name" 00:27:47.955 1127 echo "************************************" 00:27:47.955 1128 xtrace_restore 00:27:47.955 1129 time "$@" 00:27:47.955 1130 xtrace_disable 00:27:47.955 1131 echo "************************************" 00:27:47.955 1132 echo "END TEST $test_name" 00:27:47.955 1133 echo "************************************" 00:27:47.955 1134 timing_exit $test_name 00:27:47.955 ... 
00:27:47.955 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh:167 -> main(["--transport=tcp"]) 00:27:47.955 ... 00:27:47.955 162 run_test "nvmf_shutdown_tc1" nvmf_shutdown_tc1 00:27:47.955 163 run_test "nvmf_shutdown_tc2" nvmf_shutdown_tc2 00:27:47.955 164 run_test "nvmf_shutdown_tc3" nvmf_shutdown_tc3 00:27:47.955 165 # Temporarily disable on e810 due to issue #3523 00:27:47.955 166 if ! [[ "$SPDK_TEST_NVMF_NICS" == "e810" && "$TEST_TRANSPORT" == "rdma" ]]; then 00:27:47.955 => 167 run_test "nvmf_shutdown_tc4" nvmf_shutdown_tc4 00:27:47.955 168 fi 00:27:47.955 169 00:27:47.955 170 trap - SIGINT SIGTERM EXIT 00:27:47.955 ... 00:27:47.955 00:27:47.955 ========== Backtrace end ========== 00:27:47.955 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1198 -- # return 0 00:27:47.955 00:27:47.955 real 0m10.652s 00:27:47.955 user 0m27.980s 00:27:47.955 sys 0m3.933s 00:27:47.955 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1 -- # process_shm --id 0 00:27:47.955 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@812 -- # type=--id 00:27:47.955 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@813 -- # id=0 00:27:47.955 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:27:47.955 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:47.955 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:27:47.955 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:27:47.955 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@824 -- # for n in $shm_files 00:27:47.955 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:47.955 nvmf_trace.0 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@827 -- # return 0 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1 -- # kill -9 2898446 00:27:48.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 1: kill: (2898446) - No such process 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1 -- # true 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1 -- # nvmftestfini 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:48.217 rmmod nvme_tcp 00:27:48.217 rmmod nvme_fabrics 00:27:48.217 rmmod nvme_keyring 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2898137 ']' 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2898137 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2898137 ']' 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2898137 00:27:48.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2898137) - No such process 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2898137 is not found' 00:27:48.217 Process with pid 2898137 is not found 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.217 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1 -- # exit 1 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # trap - ERR 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@1129 -- # print_backtrace 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1159 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh' 'nvmf_shutdown' '--transport=tcp') 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1159 -- # local args 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1161 -- # xtrace_disable 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.763 ========== Backtrace start: ========== 00:27:50.763 00:27:50.763 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvmf_shutdown"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh"],["--transport=tcp"]) 00:27:50.763 ... 00:27:50.763 1124 timing_enter $test_name 00:27:50.763 1125 echo "************************************" 00:27:50.763 1126 echo "START TEST $test_name" 00:27:50.763 1127 echo "************************************" 00:27:50.763 1128 xtrace_restore 00:27:50.763 1129 time "$@" 00:27:50.763 1130 xtrace_disable 00:27:50.763 1131 echo "************************************" 00:27:50.763 1132 echo "END TEST $test_name" 00:27:50.763 1133 echo "************************************" 00:27:50.763 1134 timing_exit $test_name 00:27:50.763 ... 00:27:50.763 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh:65 -> main(["--transport=tcp"]) 00:27:50.763 ... 00:27:50.763 60 elif [[ $SPDK_TEST_NVMF_TRANSPORT == "rdma" ]]; then 00:27:50.763 61 # Disabled due to https://github.com/spdk/spdk/issues/3345 00:27:50.763 62 # run_test "nvmf_device_removal" test/nvmf/target/device_removal.sh "${TEST_ARGS[@]}" 00:27:50.763 63 run_test "nvmf_srq_overwhelm" "$rootdir/test/nvmf/target/srq_overwhelm.sh" "${TEST_ARGS[@]}" 00:27:50.763 64 fi 00:27:50.763 => 65 run_test "nvmf_shutdown" $rootdir/test/nvmf/target/shutdown.sh "${TEST_ARGS[@]}" 00:27:50.763 66 fi 00:27:50.763 67 run_test "nvmf_nsid" "$rootdir/test/nvmf/target/nsid.sh" "${TEST_ARGS[@]}" 00:27:50.763 68 00:27:50.763 69 trap - SIGINT SIGTERM EXIT 00:27:50.763 ... 
00:27:50.763 00:27:50.763 ========== Backtrace end ========== 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1198 -- # return 0 00:27:50.763 00:27:50.763 real 0m46.602s 00:27:50.763 user 1m47.442s 00:27:50.763 sys 0m13.914s 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1 -- # exit 1 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # trap - ERR 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # print_backtrace 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1159 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh' 'nvmf_target_extra' '--transport=tcp') 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1159 -- # local args 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1161 -- # xtrace_disable 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:50.763 ========== Backtrace start: ========== 00:27:50.763 00:27:50.763 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvmf_target_extra"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh"],["--transport=tcp"]) 00:27:50.763 ... 00:27:50.763 1124 timing_enter $test_name 00:27:50.763 1125 echo "************************************" 00:27:50.763 1126 echo "START TEST $test_name" 00:27:50.763 1127 echo "************************************" 00:27:50.763 1128 xtrace_restore 00:27:50.763 1129 time "$@" 00:27:50.763 1130 xtrace_disable 00:27:50.763 1131 echo "************************************" 00:27:50.763 1132 echo "END TEST $test_name" 00:27:50.763 1133 echo "************************************" 00:27:50.763 1134 timing_exit $test_name 00:27:50.763 ... 00:27:50.763 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh:15 -> main(["--transport=tcp"]) 00:27:50.763 ... 00:27:50.763 10 if [ ! $(uname -s) = Linux ]; then 00:27:50.763 11 exit 0 00:27:50.763 12 fi 00:27:50.763 13 00:27:50.763 14 run_test "nvmf_target_core" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:27:50.763 => 15 run_test "nvmf_target_extra" $rootdir/test/nvmf/nvmf_target_extra.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:27:50.763 16 run_test "nvmf_host" $rootdir/test/nvmf/nvmf_host.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:27:50.763 17 00:27:50.763 18 # Interrupt mode for now is supported only on the target, with the TCP transport and posix or ssl socket implementations. 00:27:50.763 19 if [[ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" && $SPDK_TEST_URING -eq 0 ]]; then 00:27:50.763 20 run_test "nvmf_target_core_interrupt_mode" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:27:50.763 ... 
00:27:50.763 00:27:50.763 ========== Backtrace end ========== 00:27:50.763 14:21:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1198 -- # return 0 00:27:50.763 00:27:50.763 real 12m48.009s 00:27:50.763 user 26m59.657s 00:27:50.763 sys 3m46.872s 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@1129 -- # trap - ERR 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@1129 -- # print_backtrace 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@1159 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh' 'nvmf_tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf') 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@1159 -- # local args 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@1161 -- # xtrace_disable 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.763 ========== Backtrace start: ========== 00:27:50.763 00:27:50.763 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvmf_tcp"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh"],["--transport=tcp"]) 00:27:50.763 ... 00:27:50.763 1124 timing_enter $test_name 00:27:50.763 1125 echo "************************************" 00:27:50.763 1126 echo "START TEST $test_name" 00:27:50.763 1127 echo "************************************" 00:27:50.763 1128 xtrace_restore 00:27:50.763 1129 time "$@" 00:27:50.763 1130 xtrace_disable 00:27:50.763 1131 echo "************************************" 00:27:50.763 1132 echo "END TEST $test_name" 00:27:50.763 1133 echo "************************************" 00:27:50.763 1134 timing_exit $test_name 00:27:50.763 ... 00:27:50.763 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh:284 -> main(["/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf"]) 00:27:50.763 ... 00:27:50.763 279 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:27:50.763 280 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:27:50.763 281 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:27:50.763 282 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:27:50.763 283 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:27:50.763 => 284 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:27:50.763 285 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:27:50.763 286 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:27:50.763 287 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:27:50.763 288 fi 00:27:50.763 289 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:27:50.763 ... 
00:27:50.763 00:27:50.763 ========== Backtrace end ========== 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@1198 -- # return 0 00:27:50.763 00:27:50.763 real 17m53.445s 00:27:50.763 user 39m2.093s 00:27:50.763 sys 5m39.651s 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@1396 -- # local autotest_es=1 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@1397 -- # xtrace_disable 00:27:50.763 14:21:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.762 ##### CORE BT spdk_nvme_perf_2898446.core.bt.txt ##### 00:28:00.762 00:28:00.762 gdb: warning: Couldn't determine a path for the index cache directory. 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_0 (deleted) during file-backed mapping note processing 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_1 (deleted) during file-backed mapping note processing 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_2 (deleted) during file-backed mapping note processing 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_3 (deleted) during file-backed mapping note processing 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_4 (deleted) during file-backed mapping note processing 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_5 (deleted) during file-backed mapping note processing 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_6 (deleted) during file-backed mapping note processing 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_25 (deleted) during file-backed mapping note processing 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_26 (deleted) during file-backed mapping note processing 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_27 (deleted) during file-backed mapping note processing 00:28:00.762 00:28:00.762 warning: Can't open file /dev/hugepages/spdk_pid2898446map_28 (deleted) during file-backed mapping note processing 00:28:00.762 [New LWP 2898446] 00:28:00.762 [New LWP 2898448] 00:28:00.762 [New LWP 2898525] 00:28:00.762 [Thread debugging using libthread_db enabled] 00:28:00.762 Using host libthread_db library "/usr/lib64/libthread_db.so.1". 00:28:00.762 Core was generated by `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1'. 00:28:00.762 Program terminated with signal SIGABRT, Aborted. 00:28:00.762 #0 0x00007f0c27b8f834 in __pthread_kill_implementation () from /usr/lib64/libc.so.6 00:28:00.762 [Current thread is 1 (Thread 0x7f0c2709fa00 (LWP 2898446))] 00:28:00.762 00:28:00.762 Thread 3 (Thread 0x7f0c256006c0 (LWP 2898525)): 00:28:00.762 #0 0x00007f0c27bd8163 in clock_nanosleep@GLIBC_2.2.5 () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 #1 0x00007f0c27beac97 in nanosleep () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 #2 0x00007f0c27bfc5d3 in sleep () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 
00:28:00.762 #3 0x0000000000420d0b in nvme_poll_ctrlrs (arg=0x0) at perf.c:3219 00:28:00.762 entry = 0x0 00:28:00.762 oldstate = 1 00:28:00.762 rc = 0 00:28:00.762 #4 0x00007f0c27b8d897 in start_thread () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 #5 0x00007f0c27c14a5c in clone3 () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 00:28:00.762 Thread 2 (Thread 0x7f0c270006c0 (LWP 2898448)): 00:28:00.762 #0 0x00007f0c27c14e62 in epoll_wait () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 #1 0x00007f0c286d9867 in eal_intr_handle_interrupts (pfd=5, totalfds=1) at ../lib/eal/linux/eal_interrupts.c:1077 00:28:00.762 events = {{events = 0, data = {ptr = 0x0, fd = 0, u32 = 0, u64 = 0}}} 00:28:00.762 nfds = 0 00:28:00.762 #2 0x00007f0c286d9aa4 in eal_intr_thread_main (arg=0x0) at ../lib/eal/linux/eal_interrupts.c:1163 00:28:00.762 pipe_event = {events = 3, data = {ptr = 0x3, fd = 3, u32 = 3, u64 = 3}} 00:28:00.762 src = 0x0 00:28:00.762 numfds = 1 00:28:00.762 pfd = 5 00:28:00.762 __func__ = "eal_intr_thread_main" 00:28:00.762 #3 0x00007f0c286b7ce7 in control_thread_start (arg=0x1fc9dd0) at ../lib/eal/common/eal_common_thread.c:282 00:28:00.762 params = 0x1fc9dd0 00:28:00.762 start_arg = 0x0 00:28:00.762 start_routine = 0x7f0c286d98d6 00:28:00.762 #4 0x00007f0c286d0a30 in thread_start_wrapper (arg=0x7ffd316ea8c0) at ../lib/eal/unix/rte_thread.c:114 00:28:00.762 ctx = 0x7ffd316ea8c0 00:28:00.762 thread_func = 0x7f0c286b7c98 00:28:00.762 thread_args = 0x1fc9dd0 00:28:00.762 ret = 0 00:28:00.762 #5 0x00007f0c27b8d897 in start_thread () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 #6 0x00007f0c27c14a5c in clone3 () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 00:28:00.762 Thread 1 (Thread 0x7f0c2709fa00 (LWP 2898446)): 00:28:00.762 #0 0x00007f0c27b8f834 in __pthread_kill_implementation () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 #1 0x00007f0c27b3d8ee in raise () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 #2 0x00007f0c27b258ff in abort () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 #3 0x00007f0c27b2581b in __assert_fail_base.cold () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 00:28:00.762 #4 0x00007f0c27b35c57 in __assert_fail () from /usr/lib64/libc.so.6 00:28:00.762 No symbol table info available. 
00:28:00.762 #5 0x00007f0c28a87572 in sock_group_impl_poll_count (group_impl=0x2332be0, group=0x231e6d0, max_events=32) at sock.c:764 00:28:00.762 sock = 0x2357050 00:28:00.762 socks = {0x233ecd0, 0x2332990, 0x234ae90, 0x2357050, 0x10000000200, 0xf44780, 0x2000047a7200, 0x7ffd316eaab0, 0x2051a10, 0x7f0c28bd9cfc , 0x58, 0x7ffd316eaab0, 0x7ffd316ea9f0, 0x7f0c28c0d2a6 , 0x200000f44780, 0x243d9e0, 0x0, 0x0, 0x1, 0x7ffd316eaab0, 0x7ffd316eaaa0, 0x7f0c28bd9d2e , 0xb000, 0x47a7180, 0x24ebe50, 0x413146 , 0x58ffff0000, 0x23ecbe0, 0x243d9e0, 0x0, 0x7ffd316eaa50, 0x23ecbe0} 00:28:00.762 num_events = 4 00:28:00.762 i = 3 00:28:00.762 __PRETTY_FUNCTION__ = "sock_group_impl_poll_count" 00:28:00.762 #6 0x00007f0c28a877e0 in spdk_sock_group_poll_count (group=0x231e6d0, max_events=32) at sock.c:791 00:28:00.762 group_impl = 0x2332be0 00:28:00.762 rc = 0 00:28:00.762 num_events = 0 00:28:00.762 __func__ = "spdk_sock_group_poll_count" 00:28:00.762 #7 0x00007f0c28a87223 in spdk_sock_group_poll (group=0x231e6d0) at sock.c:742 00:28:00.762 No locals. 00:28:00.762 #8 0x00007f0c28c57099 in nvme_tcp_poll_group_process_completions (tgroup=0x2332ae0, completions_per_qpair=0, disconnected_qpair_cb=0x40b953 ) at nvme_tcp.c:2829 00:28:00.763 group = 0x2332ae0 00:28:00.763 qpair = 0x7f0c27b83a9f 00:28:00.763 tmp_qpair = 0x434520 00:28:00.763 tqpair = 0x0 00:28:00.763 tmp_tqpair = 0x27 00:28:00.763 num_events = 32765 00:28:00.763 #9 0x00007f0c28c2851c in nvme_transport_poll_group_process_completions (tgroup=0x2332ae0, completions_per_qpair=0, disconnected_qpair_cb=0x40b953 ) at nvme_transport.c:780 00:28:00.763 No locals. 00:28:00.763 #10 0x00007f0c28c6e068 in spdk_nvme_poll_group_process_completions (group=0x2326830, completions_per_qpair=0, disconnected_qpair_cb=0x40b953 ) at nvme_poll_group.c:350 00:28:00.763 tgroup = 0x2332ae0 00:28:00.763 local_completions = 0 00:28:00.763 error_reason = 0 00:28:00.763 num_completions = 0 00:28:00.763 __PRETTY_FUNCTION__ = "spdk_nvme_poll_group_process_completions" 00:28:00.763 #11 0x000000000040bbb3 in nvme_check_io (ns_ctx=0x1fc0aa0) at perf.c:963 00:28:00.763 rc = 139690196037078 00:28:00.763 #12 0x000000000041614b in work_fn (arg=0x1fc6a20) at perf.c:1792 00:28:00.763 all_draining = true 00:28:00.763 tsc_start = 41433294235850373 00:28:00.763 tsc_end = 41433342235850373 00:28:00.763 tsc_current = 41433305332032303 00:28:00.763 tsc_next_print = 41433306235850373 00:28:00.763 worker = 0x1fc6a20 00:28:00.763 ns_ctx = 0x1fc0aa0 00:28:00.763 unfinished_ns_ctx = 829336704 00:28:00.763 warmup = false 00:28:00.763 rc = -1 00:28:00.763 check_rc = -1 00:28:00.763 check_now = 41433305402118303 00:28:00.763 swap = {tqh_first = 0xf, tqh_last = 0x7f0c288758c2 } 00:28:00.763 task = 0x42099f 00:28:00.763 #13 0x0000000000421924 in main (argc=15, argv=0x7ffd316eaec8) at perf.c:3376 00:28:00.763 rc = 0 00:28:00.763 worker = 0x0 00:28:00.763 main_worker = 0x1fc6a20 00:28:00.763 ns_ctx = 0x7f0c27b9d373 <_int_malloc+3715> 00:28:00.763 opts = {name = 0x43761a "perf", core_mask = 0x7f0c288806bc "0x1", lcore_map = 0x0, shm_id = -1, mem_channel = -1, main_core = -1, mem_size = -1, no_pci = true, hugepage_single_segments = false, unlink_hugepage = false, no_huge = false, reserved = 0, num_pci_addr = 0, hugedir = 0x0, pci_blocked = 0x0, pci_allowed = 0x453a20 , iova_mode = 0x0, base_virtaddr = 35184372088832, env_context = 0x0, vf_token = 0x0, opts_size = 128, enforce_numa = false, reserved2 = "\000\000\000\000\000\000"} 00:28:00.763 thread_id = 139690143385280 00:28:00.763 __PRETTY_FUNCTION__ = "main" 
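The Thread 1 frames above show spdk_nvme_perf (PID 2898446) aborting on an assertion inside sock_group_impl_poll_count() at sock.c:764 while work_fn() -> nvme_check_io() -> spdk_nvme_poll_group_process_completions() was polling the TCP poll group; the earlier "(deleted)" hugepage warnings are consistent with the process's hugepage-backed mappings having been unlinked before the dump was read. A minimal sketch of regenerating this kind of backtrace from the dumped core follows; the local file names are assumptions, only the binary path and PID come from the log above, and the exact gdb invocation the autotest scripts use may differ.

# Sketch only: assumes a local copy of the core for PID 2898446 and the matching,
# unmodified spdk_nvme_perf binary from this build.
BIN=./spdk/build/bin/spdk_nvme_perf              # path taken from the log
CORE=./spdk_nvme_perf_2898446.core               # hypothetical local copy of the dump
gdb --batch "$BIN" "$CORE" \
    -ex 'set pagination off' \
    -ex 'thread apply all bt full' > spdk_nvme_perf_2898446.core.bt.txt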
00:28:00.763 00:28:00.763 -- 00:28:08.898 INFO: APP EXITING 00:28:08.898 INFO: killing all VMs 00:28:08.898 INFO: killing vhost app 00:28:08.898 INFO: EXIT DONE 00:28:12.207 Waiting for block devices as requested 00:28:12.207 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:12.207 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:12.207 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:12.207 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:12.207 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:12.207 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:12.207 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:12.207 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:12.467 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:12.467 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:12.728 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:12.728 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:12.728 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:12.989 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:12.989 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:12.989 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:13.250 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:16.565 Cleaning 00:28:16.565 Removing: /var/run/dpdk/spdk0/config 00:28:16.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:16.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:16.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:16.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:16.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:16.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:16.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:16.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:16.565 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:16.565 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:16.565 Removing: /var/run/dpdk/spdk1/config 00:28:16.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:16.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:16.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:16.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:16.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:16.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:16.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:16.826 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:16.826 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:16.826 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:16.826 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:16.826 Removing: /var/run/dpdk/spdk2/config 00:28:16.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:16.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:16.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:16.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:16.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:16.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:16.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:16.826 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:16.826 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:16.826 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:16.826 Removing: /var/run/dpdk/spdk3/config 00:28:16.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:16.826 
Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:16.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:16.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:16.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:16.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:16.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:16.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:16.826 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:16.826 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:16.826 Removing: /var/run/dpdk/spdk4/config 00:28:16.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:16.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:16.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:16.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:16.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:16.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:16.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:16.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:16.826 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:16.826 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:16.826 Removing: /dev/shm/bdev_svc_trace.1 00:28:16.826 Removing: /dev/shm/nvmf_trace.0 00:28:16.826 Removing: /dev/shm/spdk_tgt_trace.pid2587430 00:28:16.826 Removing: /var/run/dpdk/spdk0 00:28:16.826 Removing: /var/run/dpdk/spdk1 00:28:16.826 Removing: /var/run/dpdk/spdk2 00:28:16.826 Removing: /var/run/dpdk/spdk3 00:28:16.826 Removing: /var/run/dpdk/spdk4 00:28:16.826 Removing: /var/run/dpdk/spdk_pid2585771 00:28:16.826 Removing: /var/run/dpdk/spdk_pid2587430 00:28:16.826 Removing: /var/run/dpdk/spdk_pid2588103 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2589155 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2589487 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2590597 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2590881 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2591189 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2592184 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2592940 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2593341 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2593740 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2594149 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2594518 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2594686 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2595054 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2595440 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2596778 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2600552 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2600920 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2601155 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2601296 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2601671 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2601987 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2602374 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2602410 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2602757 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2603065 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2603136 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2603461 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2603915 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2604266 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2604605 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2609188 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2614574 00:28:17.087 Removing: 
/var/run/dpdk/spdk_pid2626669 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2627353 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2632572 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2633057 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2638182 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2645277 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2648945 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2661636 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2672528 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2674631 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2675853 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2696558 00:28:17.087 Removing: /var/run/dpdk/spdk_pid2701437 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2757766 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2764728 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2771646 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2779673 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2779729 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2780844 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2781860 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2782883 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2783535 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2783554 00:28:17.088 Removing: /var/run/dpdk/spdk_pid2783873 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2783902 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2783907 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2784912 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2785922 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2786936 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2787856 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2787926 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2788206 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2789569 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2790775 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2800490 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2834243 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2839657 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2841647 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2844101 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2844583 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2844958 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2845272 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2846010 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2848332 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2849537 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2850130 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2852840 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2853546 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2854368 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2859323 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2866022 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2866023 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2866024 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2870716 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2880970 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2885700 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2893120 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2895074 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2896908 00:28:17.350 Removing: /var/run/dpdk/spdk_pid2898446 00:28:17.350 Clean 00:28:17.611 14:22:06 nvmf_tcp -- common/autotest_common.sh@1453 -- # return 1 00:28:17.611 14:22:06 nvmf_tcp -- common/autotest_common.sh@1 -- # : 00:28:17.611 14:22:06 nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1 00:28:17.611 14:22:06 -- spdk/autorun.sh@27 -- $ trap - ERR 00:28:17.611 14:22:06 -- spdk/autorun.sh@27 -- $ print_backtrace 00:28:17.611 14:22:06 -- 
common/autotest_common.sh@1157 -- $ [[ ehxBET =~ e ]] 00:28:17.611 14:22:06 -- common/autotest_common.sh@1159 -- $ args=('/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf') 00:28:17.611 14:22:06 -- common/autotest_common.sh@1159 -- $ local args 00:28:17.611 14:22:06 -- common/autotest_common.sh@1161 -- $ xtrace_disable 00:28:17.611 14:22:06 -- common/autotest_common.sh@10 -- $ set +x 00:28:17.611 ========== Backtrace start: ========== 00:28:17.611 00:28:17.611 in spdk/autorun.sh:27 -> main(["/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf"]) 00:28:17.611 ... 00:28:17.611 22 trap 'timing_finish || exit 1' EXIT 00:28:17.611 23 00:28:17.611 24 # Runs agent scripts 00:28:17.611 25 $rootdir/autobuild.sh "$conf" 00:28:17.611 26 if ((SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1)); then 00:28:17.611 => 27 sudo -E $rootdir/autotest.sh "$conf" 00:28:17.611 28 fi 00:28:17.611 ... 00:28:17.611 00:28:17.611 ========== Backtrace end ========== 00:28:17.611 14:22:06 -- common/autotest_common.sh@1198 -- $ return 0 00:28:17.611 14:22:06 -- spdk/autorun.sh@1 -- $ timing_finish 00:28:17.611 14:22:06 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:28:17.611 14:22:06 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:17.611 14:22:06 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:28:17.611 14:22:06 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:17.624 [Pipeline] } 00:28:17.641 [Pipeline] // stage 00:28:17.648 [Pipeline] } 00:28:17.660 [Pipeline] // timeout 00:28:17.667 [Pipeline] } 00:28:17.670 ERROR: script returned exit code 1 00:28:17.670 Setting overall build result to FAILURE 00:28:17.683 [Pipeline] // catchError 00:28:17.688 [Pipeline] } 00:28:17.701 [Pipeline] // wrap 00:28:17.706 [Pipeline] } 00:28:17.722 [Pipeline] // catchError 00:28:17.734 [Pipeline] stage 00:28:17.738 [Pipeline] { (Epilogue) 00:28:17.753 [Pipeline] catchError 00:28:17.755 [Pipeline] { 00:28:17.771 [Pipeline] echo 00:28:17.773 Cleanup processes 00:28:17.778 [Pipeline] sh 00:28:18.067 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:18.067 2566852 sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733490077 00:28:18.067 2566888 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733490077 00:28:18.067 2911078 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:18.079 [Pipeline] sh 00:28:18.362 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:18.362 ++ grep -v 'sudo pgrep' 00:28:18.362 ++ awk '{print $1}' 00:28:18.362 + sudo kill -9 2566852 2566888 00:28:18.374 [Pipeline] sh 00:28:18.663 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:25.258 [Pipeline] sh 00:28:25.549 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:25.549 Artifacts sizes are good 00:28:25.566 [Pipeline] archiveArtifacts 00:28:25.574 Archiving artifacts 00:28:26.221 [Pipeline] sh 00:28:26.662 + sudo chown -R sys_sgci: 
/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:26.674 [Pipeline] cleanWs 00:28:26.681 [WS-CLEANUP] Deleting project workspace... 00:28:26.681 [WS-CLEANUP] Deferred wipeout is used... 00:28:26.688 [WS-CLEANUP] done 00:28:26.689 [Pipeline] } 00:28:26.702 [Pipeline] // catchError 00:28:26.711 [Pipeline] echo 00:28:26.712 Tests finished with errors. Please check the logs for more info. 00:28:26.714 [Pipeline] echo 00:28:26.716 Execution node will be rebooted. 00:28:26.728 [Pipeline] build 00:28:26.730 Scheduling project: reset-job 00:28:26.741 [Pipeline] sh 00:28:27.027 + logger -p user.err -t JENKINS-CI 00:28:27.038 [Pipeline] } 00:28:27.051 [Pipeline] // stage 00:28:27.057 [Pipeline] } 00:28:27.070 [Pipeline] // node 00:28:27.075 [Pipeline] End of Pipeline 00:28:27.111 Finished: FAILURE
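The nested backtraces earlier in the run show the failure propagating from nvmf_shutdown_tc4 (shutdown.sh:167) up through nvmf_shutdown, nvmf_target_extra and nvmf_tcp before autorun.sh stopped at autotest.sh. A minimal sketch of re-running just the failing script against a local checkout is below, assuming an SPDK tree at ./spdk built the same way as this job; the CI-specific environment (e810 NICs, the cvl_0_* interfaces, hugepage setup) is not reproduced by this sketch, so the test may behave differently without it.

# Sketch only: re-run the failing leaf test directly, outside run_test's timing wrapper.
cd ./spdk
export SPDK_TEST_NVMF_TRANSPORT=tcp   # transport selected by this job
export SPDK_TEST_NVMF_NICS=e810       # NIC family the job selected (assumes the same hardware is present)
sudo -E ./test/nvmf/target/shutdown.sh --transport=tcp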